Platform · Governance & Compliance

Policies your auditor can read.
Evidence you can export.

Set the policy once. Apply it across every model, agent, and connector. Map your controls to your framework of choice — EU AI Act, NIST AI RMF, ISO 42001, SOC 2, DORA, FCA SYSC. When a regulator asks for evidence, regenerate the pack in minutes, not weeks.

AI inventory · the question every CISO is now getting

"Show me every AI component in Product X."

A regulator under the EU AI Act. A bank customer doing third-party due diligence on your SaaS. A board paper before an internal audit. The question is the same, and most teams answer it with a six-week scavenger hunt across cloud accounts, SSO consents, vendor invoices, and Slack threads. AI Warden answers it on one page.

Tuesday, 09:14 — inbound from a Tier-1 bank's vendor-risk team

“Under our model risk management policy and the EU AI Act, we need a complete inventory of every AI/ML component used in FinSight Reporting before we renew. Foundation models, agents, embeddings, vector stores, MCP servers, third-party APIs — with the data classes each touches and the controls operating against each. We need it by end of week.”

— Director, Third-Party Risk · Annex III filing reference TPR-2826

Without AI Warden

Six weeks of forensic accounting.

  • Engineering owners hand-list every model their service calls. They miss the embeddings job that runs in batch.
  • FinOps pulls cloud bills to find Bedrock, Azure OpenAI, Vertex AI charges nobody remembered registering.
  • Security greps the OAuth grant log for AI-shaped vendors — finds three Copilot-class tools nobody documented.
  • Compliance writes the answer in a Word document. By the time it's signed, two more models have shipped.
  • Next quarter, the same question arrives from a different customer. Start again.

With AI Warden

One product page. Live. Always.

  • Cloud-asset scanners crawl AWS, Azure, GCP, and your IDP every hour. Every Bedrock model, Azure OpenAI deployment, Vertex endpoint, and Foundry agent is auto-discovered.
  • New AI assets land in an approval queue. Nothing joins a Product until an owner accepts it — with the diff, the approver, and the timestamp on the audit log.
  • Each Product has the controls and policies it inherits, the data classes it processes, and a real-time coverage score against EU AI Act, NIST AI RMF, ISO 42001, SOC 2.
  • Export the inventory pack as a signed PDF + JSON in two clicks. Sign it with the same key that signs the audit log.
  • The next regulator question is a refresh, not a project.

What gets inventoried

From the cloud control plane down to the third-party API.

The platform builds the graph automatically. You curate ownership and Product membership; the discovery, the data-class tagging, and the control-mapping are continuous.

  • Foundation models — Bedrock, Azure OpenAI, Vertex AI, Foundry, on-prem (vLLM, Ollama).
  • Agents & copilots — registered in the agent registry, with their tools and scopes.
  • MCP servers & connectors — the tools your agents can actually reach.
  • Embeddings, vector stores, RAG indices — including the data they were built from.
  • Third-party AI SaaS — discovered via OAuth grants, egress logs, and SSO consent.
  • Shadow AI — cloud-account assets nobody registered yet, surfaced for approval.
Product: FinSight Reporting · 14 components · 0 unowned

Component | Kind | Data class | State
azure-oai/gpt-4o-eu | foundation model | PII · MNPI | approved
bedrock/claude-3-5-sonnet | foundation model | MNPI | approved
finsight-summariser | agent | internal | approved
finsight-rag-index (pgvector) | vector store | PII · MNPI | approved
azure-oai/text-embedding-3-large | embedding model | PII | approved
mcp/finance-read | MCP server | internal | approved
mcp/sharepoint-policies | MCP server | internal | approved
vertex-ai/gemini-1.5-pro | foundation model | unset | pending approval
perplexity-api (egress) | 3rd-party AI SaaS | unset | shadow · review

Inventory pack: finsight-2026-05-08.pdf · sig:ed25519 · 142 KB
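The JSON half of the exported pack is machine-readable for downstream GRC tooling. A sketch of what one component record might look like — the field names here are illustrative, not the exact AI Warden export schema:

```json
{
  "product": "FinSight Reporting",
  "generated": "2026-05-08",
  "components": [
    {
      "id": "azure-oai/gpt-4o-eu",
      "kind": "foundation model",
      "data_classes": ["PII", "MNPI"],
      "state": "approved"
    },
    {
      "id": "mcp/finance-read",
      "kind": "MCP server",
      "data_classes": ["internal"],
      "state": "approved"
    }
  ],
  "signature": { "alg": "ed25519", "sig": "…" }
}
```

The same ed25519 key that signs the audit log signs this file, so a customer's risk team can verify both against one public key.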

Policies

Versioned, diffable, blast-radius-aware.

Every policy in AI Warden is a versioned object. Changes are diffs. Diffs go through a four-eyes approval before they take effect. The system shows the blast radius — how many teams, agents, models, and MCP servers the change touches — before anyone clicks apply.

Write the rules in a language your CISO can read.

Policies are declarative YAML, mapped to a documented control catalog. No DSL to learn. Every field is annotated with the framework citation it satisfies, so the policy doc doubles as evidence.

  • Versioned — every change is a diff with an author, a timestamp, and an approver.
  • Blast-radius preview — see the principals, agents, products, and connectors a change touches before it ships.
  • Scoped — apply globally, by team, by environment, or by product.
  • Reversible — one click rollback to any prior version.
  • Tested — replay a policy against the last 24h of traffic before promoting.
Policy diff: pol-1287 · v4 → v5 · awaiting 2nd approval
# Production LLM egress · risk-aligned
  budget_per_team_usd:   8000
+ budget_per_team_usd:   12000
  models_allowed:
+   - "openai:gpt-4o-mini"
- redact_pii:           false
+ redact_pii:           true

# EU AI Act · Annex III · high-risk 
+ human_oversight:      required
+ log_retention_days:   2555  # 7y

# Blast radius (computed)
  teams:    142
  agents:   3,401
  products: 37
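Read as a whole document rather than a diff, a policy under the v5 values above might look like the following — a sketch only; the field names and citation comments are illustrative, not the shipped schema:

```yaml
# Illustrative policy document -- names are assumptions, not the real schema.
policy: prod-llm-egress
version: 5
scope:
  environment: production
rules:
  budget_per_team_usd: 12000        # SOC 2 CC1.5
  models_allowed:
    - "azure-oai:gpt-4o-eu"
    - "openai:gpt-4o-mini"
  redact_pii: true                  # EU AI Act Art. 10(5)
  human_oversight: required         # EU AI Act Art. 14
  log_retention_days: 2555          # DORA RTS · 7 years
approvals:
  required: 2                       # four-eyes before any version takes effect
```

Because each field carries its framework citation, the policy file itself is admissible as evidence in the export pack.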

Controls & products

From "we have a policy" to "this control is operating, here is the evidence."

A control is a thing AI Warden does — block PII, cap a budget, require approval. A product is a real system the control protects — a copilot, a chatbot, a research tool. Wire controls to products, and the platform shows you live attestation against every framework you care about.

  • Pre-built control catalog mapped to EU AI Act, NIST AI RMF, ISO 42001, SOC 2 CC, DORA RTS.
  • Coverage scoring — every product gets a real-time score against every framework it owns.
  • Failing controls page the platform team that owns them — with evidence of the failure and the remediation path.
  • Custom controls — bring your own internal framework alongside the standards.
Product: customer-ops · agent-summariser · 2 controls failing

Control | State | Framework
Per-team budget cap | operating | SOC 2 CC1.5
PII redaction in egress | operating | EU AI Act 10(5)
Human oversight for high-risk routes | partial | EU AI Act 14
Audit log retention 7y | operating | DORA RTS
Output guardrails on model 4o | failing | NIST AI RMF GV-1
Model card on file | operating | ISO 42001 9.2

Four-eyes approvals

Material changes don't take effect alone.

A policy that loosens enforcement, a connector that opens a new data path, a system principal that gains a scope — these are the changes a regulator wants to see attested. AI Warden enforces a four-eyes flow on every one, with the approver, the timestamp, and the diff baked into the audit row.

Policy & firewall changes

Any change to enforcement scope. Any change to PII or secret rules. Any change that touches more than N agents.

System principal scope

New scope grants for an agent. New IDP client created. Anything that lets a non-human reach a new system.

Connector onboarding

New OAuth app, new MCP server, new data sink. Approval request, approver record, attached evidence.

Cloud AI service onboarding

Every newly-discovered Bedrock model, Azure OpenAI deployment, Vertex endpoint or Foundry agent lands in the queue. Two approvers attach a Product, a data class, and an owner before it can be consumed by any agent.

Built-in Azure Policy maker

Decisions made in AI Warden, enforced at the cloud control plane.

An approval inside AI Warden shouldn't stop at our audit log — it should reach down into the cloud and prevent anyone from quietly deploying around it. The platform turns approved decisions into Azure Policy definitions and assignments, deployed straight to your management group or subscription. Deny the unapproved. Audit the in-flight. Append the missing tags.

  • Generated, not hand-written. Approve a model in AI Warden → the corresponding Azure Policy definition is rendered, version-pinned, and pushed via ARM.
  • Three effects, on purpose. deny for unapproved Microsoft.CognitiveServices/accounts kinds and SKUs, audit for everything else, append to inject your data-class & product tags.
  • Scope follows your hierarchy. Assign at the management-group root for tenant-wide rails, at the subscription for an environment, or at the resource group for a single Product.
  • Drift is reported back. Compliance state from Azure flows into the same evidence pack as the AI Warden audit log — one number, one source of truth.
  • Same pattern, other clouds. AWS Service Control Policies and GCP Organization Policy follow the same generator; pick the cloud, get the right artefact.

Available for Azure today. AWS SCP and GCP Organization Policy generators on the roadmap.

Azure Policy (generated) · aiw-cogsvc-allowlist · v3 · deployed · MG-root
// Generated from AI Warden approval req-4821
// Approvers: a.morgan, l.shah · 2026-05-07T14:02Z
{
  "displayName": "AIW · Allow only approved Azure OpenAI deployments",
  "policyType":  "Custom",
  "mode":        "All",
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type",
          "equals": "Microsoft.CognitiveServices/accounts/deployments" },
        { "not": { "field": "name",
          "in": [ "gpt-4o-eu", "text-embedding-3-large" ] } }
      ]
    },
    "then": { "effect": "deny" }
  },
  "metadata": {
    "aiw_product":    "FinSight Reporting",
    "aiw_data_class": "PII · MNPI",
    "aiw_evidence":   "req-4821 · audit:0x9f3c…"
  }
}
Compliant: 1,284 / 1,284 resources in scope · last evaluated 38 s ago

Audit & evidence

One signed log behind every claim.

AI Warden writes a structured, signed audit row for every change, every approval, every decision the gateway makes. Rows are append-only, hash-chained, and streamed in near-real-time to your data lake or SIEM. The evidence pack just queries them.

  • Hash-chained — tamper-evident by design. Verify the chain anywhere with the public key.
  • Streaming to S3 / GCS / Azure Blob, Splunk, Sentinel, Elastic, Kafka.
  • Queryable in ClickHouse — the request log dashboard is just SQL views.
  • Long-retention — configurable to 7+ years for sectoral regulators.
  • Pre-built reports — "all PII redactions, last 30 days, by team" in two clicks.
audit · stream — last 60 s
  • 10:42:08 · approve · policy pol-1287 v5 by a.morgan
  • 10:42:14 · redact · 2 PII matches in route customer-ops/llm
  • 10:42:21 · budget · team risk-platform over 90% · alert sent
  • 10:42:33 · block · SQLi pattern · MCP server finance-read
  • 10:42:48 · scope · agent credit-risk-summariser within envelope
  • 10:42:57 · login · m.kovacs · MFA · from 10.42.7.21
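Tamper-evidence needs no proprietary tooling: a hash chain can be rebuilt and checked with any standard library. A minimal sketch of the idea — the row fields and canonicalisation here are assumptions, not the exact AI Warden schema, and the ed25519 signature check over the chain head is not shown:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel digest before the first row


def _digest(row: dict) -> str:
    # Canonical JSON (sorted keys) so every verifier computes the same hash.
    return hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()


def append_row(chain: list, event: dict) -> None:
    """Append an event, linking it to the digest of the previous full row."""
    prev = _digest(chain[-1]) if chain else GENESIS
    chain.append({**event, "prev": prev})


def verify_chain(chain: list):
    """Walk the chain front to back.

    Returns (True, row_count) if intact, or (False, index) pointing at the
    first row whose back-link no longer matches -- i.e. where tampering shows.
    """
    prev = GENESIS
    for i, row in enumerate(chain):
        if row["prev"] != prev:
            return False, i
        prev = _digest(row)
    return True, len(chain)
```

Editing any historical row changes its digest, so every later row's `prev` link breaks — which is why the evidence pack can simply query the rows and let anyone re-verify them.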

Frameworks & standards

Mapped to the frameworks regulators are actually asking about.

Out-of-the-box mappings for the standards that matter today, with a clear path for the ones still being written.

EU AI Act · NIST AI RMF 1.0 · ISO/IEC 42001 · ISO/IEC 27001 · SOC 2 (Type II) · DORA · FCA SYSC · GDPR · UK GDPR · HIPAA · MAS TRM

EU AI Act

Annex III + risk-tiered controls

Pre-built mapping for Articles 9–15: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy & robustness.

NIST AI RMF 1.0

Govern · Map · Measure · Manage

Every AI Warden control tagged with the matching NIST function and category. Generates the AI RMF profile for the products you've registered.

ISO/IEC 42001

AI management system

Clauses 4–10 covered, with policy templates, documented procedures, and the audit log to back them up. Designed for an external certification audit.

SOC 2 Type II

Common Criteria + AI extensions

Map AI Warden controls to your existing CC framework. Your current SOC 2 report already covers much of this ground — we close the AI-specific gaps.

DORA · FCA SYSC

Operational resilience

ICT third-party risk for LLM providers, model concentration, fallback, recovery objectives. Pre-mapped to the RTS for financial-sector tenants.

Custom frameworks

Your internal standard, alongside

Bring your group risk catalog, your model-governance policy, your three-lines-of-defence labels. AI Warden handles them as first-class peers of the externals.

Access & identity

Identity-rooted RBAC. For people and for agents.

SSO into your IDP — Keycloak, Entra, Okta. Roles map to permissions. Permissions map to surfaces. Every audit row carries the real principal: a person, a service account, or a specific agent.

  • OIDC SSO with optional step-up MFA on sensitive actions.
  • Granular roles — viewer · operator · approver · admin · auditor (read-only audit pane).
  • Just-in-time elevation — break-glass with a time bound and a paper trail.
  • System principals for agents and CI, rooted in the same IDP, same audit story.
Role | Sees | Can do
viewer | dashboards, request log | read
operator | + policies, scanners | edit (rules pending approval)
approver | + approval queue | approve · reject (4-eyes)
admin | + tenant config, RBAC | manage everything
auditor | read-only audit, evidence packs | export · query · sign
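Role-to-permission wiring of this kind is usually declarative configuration. A hypothetical mapping consistent with the roles above — the keys and surface names are illustrative, not the shipped schema:

```yaml
# Illustrative RBAC mapping -- names are assumptions, not the real schema.
roles:
  viewer:
    surfaces: [dashboards, request-log]
    actions:  [read]
  operator:
    inherits: viewer
    surfaces: [policies, scanners]
    actions:  [edit]               # rule edits land in the approval queue
  approver:
    inherits: operator
    surfaces: [approval-queue]
    actions:  [approve, reject]    # 4-eyes: never the author of the diff
  auditor:
    surfaces: [audit, evidence-packs]
    actions:  [export, query, sign]
```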

Compliance-first conversation

Bring your auditor.

We host working sessions for compliance, risk, and audit teams — separate from the engineering walkthrough. Bring the people who will sign the report. Leave with a draft evidence pack against your framework of choice.