KQL-First Agent Design and SCU Optimization

SCU Billing and Why Agent Design Matters
Microsoft Security Copilot is billed in Security Compute Units (SCUs). Every time you prompt it — ask a question, request a summary, investigate an alert — you consume SCUs. Most organisations start by using Copilot the way it looks in demos: type a question, get an answer. That works for exploration. For repeatable operations work, it gets expensive quickly.
The default assumption is that Copilot’s AI does the heavy lifting. In practice, Copilot doesn’t have to be the brain — it just needs to be the trigger. The intelligence can live in pre-built KQL, and Copilot simply executes it.
This post covers the design principles behind OSCAR (Operations Security & Compliance Automated Reporter) — a Security Copilot agent built to run 100+ compliance checks daily across NIST CSF 2.0, NIST 800-53, and CIS Controls v8, while consuming only ~7.5% of the free 400 SCU monthly allocation.
Understanding SCUs
Before building anything, understand what you’re spending.
- 1 SCU ≈ 1 agent skill execution — each KQL skill called by your agent consumes roughly 1-2 SCUs
- 400 SCUs/month are included free with eligible Microsoft licences — sufficient for meaningful automation if used efficiently
- Natural language prompting is the expensive path — asking Copilot to “analyse my authentication logs” triggers multiple reasoning steps, each burning SCUs
The trade-off: natural language prompts are flexible but costly; KQL skills are precise and cheap. For repeatable, scheduled work — compliance reporting, daily threat checks, audit trail generation — pre-built KQL skills are the better choice.
The KQL-First Design Principle
The core idea is simple: move intelligence into KQL, use Copilot only as the orchestration layer.
In a traditional Copilot workflow, you ask a question and the AI figures out what data to look at, what query to run, and how to interpret it. Each step burns SCUs. In a KQL-first agent:
1. All detection logic lives in pre-built KQL skills — the query already knows exactly what to look for, which tables to query, which fields matter
2. Security Copilot executes the skill — one SCU, the KQL runs against your Sentinel/Log Analytics workspace, results come back as structured JSON
3. Logic Apps handle persistence — results flow automatically to a custom Sentinel table (ComplianceReports_CL) without further AI involvement
The AI isn’t analysing your data. It’s calling a function that does — and that function runs in Log Analytics, not in Copilot’s compute. This distinction is what makes the economics work.
OSCAR Architecture

Four components, each with a single responsibility:
- OSCAR agent (`agent-manifest.yaml`) — 13 KQL skills mapped to compliance controls, plus one Agent skill as the orchestrator
- Azure Logic App — schedules daily execution, calls the Copilot API, strips the JSON from markdown code fences, and writes to Log Analytics. Cost: ~$0.01/day on Consumption tier
- ComplianceReports_CL — custom Sentinel table storing every finding with control ID, framework, severity, and remediation flag. Retention: 90 days
- Sentinel Workbooks — executive compliance scorecard, control status matrix, remediation tracker — all built on KQL queries against the custom table
No proprietary storage. No separate database. Everything queryable from Sentinel.
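The fence-stripping step is the main piece of glue logic in the pipeline: Copilot tends to wrap structured output in markdown code fences, which must be removed before the payload can be written to Log Analytics. A minimal sketch of that step, shown in Python for illustration (in OSCAR itself this runs as Logic App workflow expressions):

```python
import json
import re

def extract_json(copilot_text: str):
    """Pull a JSON payload out of a Copilot response that may wrap it
    in markdown code fences (```json ... ```)."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", copilot_text, re.DOTALL)
    raw = match.group(1) if match else copilot_text.strip()
    return json.loads(raw)

# Example: a fenced response shaped like a compliance skill's output
response = '```json\n[{"ControlID": "AC-7", "Severity": "High"}]\n```'
findings = extract_json(response)  # a plain Python list of finding rows
```

Doing this parsing outside Copilot keeps the AI out of the persistence path entirely, which is the point of the architecture.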
Building KQL Skills in the Agent Manifest
The agent manifest YAML format for KQL skills is straightforward:
```yaml
SkillGroups:
  - Format: KQL
    Skills:
      - Name: FailedAuthenticationReport
        DisplayName: Failed Authentication Attempts Report (AC-7, CIS-5.1)
        Description: Detect failed authentication attempts indicating brute force attacks
        Settings:
          Target: Sentinel
          Template: >-
            let timeRange = 24h;
            SigninLogs
            | where TimeGenerated > ago(timeRange)
            | where ResultType != 0
            | summarize
                FailedAttempts = count(),
                FirstAttempt = min(TimeGenerated),
                LastAttempt = max(TimeGenerated),
                Locations = make_set(Location),
                IPAddresses = make_set(IPAddress)
              by UserPrincipalName
            | where FailedAttempts >= 5
            | extend
                ControlID = "AC-7",
                Framework = "NIST_800_53",
                Severity = "High",
                RemediationRequired = "true"
            | project TimeGenerated = now(), UserPrincipalName,
                FailedAttempts, Locations, IPAddresses,
                ControlID, Framework, Severity, RemediationRequired
```
Three things to notice:
- `Target: Sentinel` — the KQL runs directly against your Log Analytics workspace, not inside Copilot’s reasoning engine
- Control metadata is embedded in the query — `ControlID`, `Framework`, and `Severity` are added as computed columns, so every result row carries its compliance context
- The output schema is consistent — every skill returns the same column structure, making it trivial to union results into a single compliance table
The “No Findings” Pattern
Compliance reporting has a requirement that pure detection doesn’t: you need evidence that you checked, even when everything is clean. An empty result set doesn’t prove the query ran — it just looks like missing data.
The solution is a union that guarantees at least one row:
```kql
let findings = SigninLogs
    | where TimeGenerated > ago(24h)
    | where ResultType != 0
    | summarize FailedAttempts = count() by UserPrincipalName
    | where FailedAttempts >= 5
    | extend FindingType = "Suspicious Activity";
let hasResults = toscalar(findings | count) > 0;
union findings,
    (print placeholder = 1
    | where not(hasResults)
    | extend FindingType = "No Findings", UserPrincipalName = "N/A", TimeGenerated = now()
    | project-away placeholder)
```
When no suspicious logins exist, the query returns a single No Findings row with the current timestamp. Your compliance workbook always shows the control was checked. Your auditors always have evidence. The Logic App always has something to write to ComplianceReports_CL.
This pattern is essential for any automated compliance use case. Without it, clean environments look identical to broken automation.
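The same guarantee is cheap to enforce on the persistence side as a second line of defense: if the pipeline ever receives an empty result set, it can inject the placeholder row itself before writing. A hypothetical sketch (the function name and row schema are illustrative):

```python
from datetime import datetime, timezone

def ensure_evidence(findings: list[dict], control_id: str) -> list[dict]:
    """Guarantee at least one row per control run, mirroring the KQL
    'No Findings' union: an empty result becomes an explicit
    'we checked, nothing found' record rather than silence."""
    if findings:
        return findings
    return [{
        "ControlID": control_id,
        "FindingType": "No Findings",
        "UserPrincipalName": "N/A",
        "TimeGenerated": datetime.now(timezone.utc).isoformat(),
    }]
```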
SCU Cost Breakdown
OSCAR’s daily run executes 13 KQL skills via one agent orchestrator call:
| Execution | SCU Cost |
|---|---|
| Agent orchestrator (1 call) | ~2 SCUs |
| 13 KQL skills × ~2 SCUs each | ~26 SCUs |
| Daily total | ~28-30 SCUs |
| Monthly total | ~870 SCUs |
That exceeds the free 400 — but not all skills run every day. OSCAR uses report groups to control scope:
- `daily_critical` — 8 controls, runs daily (~16 SCUs)
- `weekly_compliance` — 7 controls, runs weekly
- Domain-specific groups (identity, threats, audit) — run on schedule
Tuned to daily critical + weekly full sweep: ~500 SCUs/month, achievable within a 1-SCU provisioned capacity. The 7.5% figure applies to the critical-only daily run. Full coverage requires modest provisioning — still far cheaper than ad-hoc prompting at scale.
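The tuning arithmetic can be sanity-checked with the per-run figures above. The split below, a full sweep replacing the critical-only run one day per week, is an assumed schedule for illustration:

```python
# Rough monthly SCU budget using the per-run estimates quoted above
DAILY_CRITICAL_SCUS = 16   # 8 controls × ~2 SCUs each
FULL_SWEEP_SCUS = 30       # orchestrator + all 13 skills (~28-30 SCUs)

critical_days = 26         # ~30 days minus the 4 weekly sweep days
sweep_days = 4

monthly_scus = DAILY_CRITICAL_SCUS * critical_days + FULL_SWEEP_SCUS * sweep_days
over_free_tier = max(0, monthly_scus - 400)   # free allocation is 400 SCUs
```

The total lands in the same ballpark as the ~500 SCUs/month quoted above, with only the overage beyond the free 400 needing provisioned capacity.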
Security Domains Covered
OSCAR’s 13 skills span the domains that matter for compliance reporting:
| Domain | Example Controls | Data Source |
|---|---|---|
| Identity & Access | AC-2, AC-7, IA-2, CIS-5.x | SigninLogs, AuditLogs |
| Threat Detection | SI-3, SI-4, DE.AE-02 | SecurityAlert, SecurityIncident |
| Audit & Logging | AU-2, AU-6, AU-12, CIS-8.x | AuditLogs, AzureActivity |
| Vulnerability Management | SI-2, CIS-16.x, CIS-18.x | Update, SecurityRecommendation |
| MITRE ATT&CK | Multiple tactics/techniques | SecurityAlert |
Each skill returns results tagged with control IDs from NIST CSF 2.0, NIST 800-53 Rev 5, and CIS Controls v8 simultaneously — the same finding maps to all three frameworks in a single query pass.
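As a concrete picture of that multi-framework tagging, a single finding might carry its cross-walk like this (the specific control mappings below are illustrative, not an authoritative cross-walk):

```python
# One finding row tagged for all three frameworks at once, produced by
# a skill that embeds the mapping directly in the query
finding = {
    "UserPrincipalName": "user@contoso.com",
    "FailedAttempts": 12,
    "Severity": "High",
    "Controls": {
        "NIST_800_53": "AC-7",       # unsuccessful logon attempts
        "NIST_CSF_2_0": "PR.AA-03",  # users and services are authenticated
        "CIS_v8": "5.1",             # account management
    },
}

# One query pass yields evidence usable for all three framework reports
frameworks_covered = sorted(finding["Controls"])
```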
Extending the Pattern
The OSCAR architecture applies beyond compliance reporting. Any repeatable security operations workflow fits this model:
- Daily threat hunting — pre-built hunting queries as KQL skills, Copilot triggers them on schedule, results land in Sentinel for analyst review
- Incident enrichment — Logic App fires on new high-severity incident, calls a Copilot skill that runs context-gathering KQL, posts enriched findings back to the incident
- SLA monitoring — query open incidents by age, flag breaches, push to a Sentinel table that feeds an operations workbook
The pattern is always the same: express the detection logic in KQL, register it as a skill, let the Logic App be the scheduler, let Sentinel be the store.
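That division of labour can be sketched as a small dispatch model: each workflow owns its set of pre-built KQL skills, and the scheduler's only decision is which set fires. The skill names below are hypothetical placeholders, not entries from OSCAR's manifest:

```python
# Registry of pre-built KQL skills per repeatable workflow
SKILL_REGISTRY: dict[str, list[str]] = {
    "threat_hunting": ["RareProcessExecution", "AnomalousSigninPatterns"],
    "incident_enrichment": ["UserContextLookup", "RelatedAlertHistory"],
    "sla_monitoring": ["OpenIncidentAging"],
}

def skills_for(workflow: str) -> list[str]:
    """The scheduler's only decision: which pre-built skills to fire."""
    return SKILL_REGISTRY.get(workflow, [])
```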
Summary
Key Takeaways:
- SCUs are consumed per AI interaction — natural language prompting at scale gets expensive fast
- KQL-first agent design pushes intelligence into pre-built queries; Copilot becomes the executor, not the reasoner
- The “No Findings” union pattern guarantees audit trail evidence even when controls are passing
- Azure Logic Apps handle scheduling and data persistence cheaply (~$0.01/day), keeping the architecture entirely within the Microsoft stack
- Compliance coverage across three frameworks (NIST CSF 2.0, NIST 800-53, CIS Controls v8) is achievable within free SCU tiers when daily scope is managed through report groups
Next Steps:
- Review the Security Copilot custom plugin documentation to understand the agent manifest format
- Identify your top 5 repeatable SOC queries — these are your first KQL skills
- Deploy a test Logic App with static data before connecting to Copilot, to validate the JSON pipeline without burning SCUs
References
Microsoft Documentation:
- Security Copilot Overview: https://learn.microsoft.com/en-us/copilot/security/microsoft-security-copilot
- Security Copilot Plugin Development: https://learn.microsoft.com/en-us/copilot/security/plugin-overview
- Logic Apps Documentation: https://learn.microsoft.com/en-us/azure/logic-apps/
- KQL Reference: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/
Compliance Frameworks:
- NIST Cybersecurity Framework 2.0: https://www.nist.gov/cyberframework
- NIST SP 800-53 Rev 5: https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final
- CIS Controls v8: https://www.cisecurity.org/controls/v8
Related Tools:
- Azure Monitor Log Analytics: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-overview
- Microsoft Sentinel Workbooks: https://learn.microsoft.com/en-us/azure/sentinel/monitor-your-data