Policy Governed AI Gateway
Secure every AI
interaction.
PLCY sits between your application and any LLM. It classifies, redacts, routes, rate-limits, and logs every request — in under 5 milliseconds.
Built for compliance-first enterprises
Interactive Demo
See the pipeline in action.
Run real enterprise scenarios through the PLCY 5-step pipeline. Watch PII get classified and redacted, injections get blocked, and responses get streamed — all in real time. Or try it with your own data using the custom input mode.
Healthcare
HIPAA compliance — PHI detection, MRN redaction
Fintech
PCI-DSS & SOX — card number masking, account protection
Dev Tools
SOC2 — API key detection, secret scanning
The problem
AI adoption without governance
is a compliance liability.
Every enterprise shipping AI products faces the same four risks. PLCY solves all of them in a single pipeline.
PII leaks to third-party models
Social security numbers, medical records, and credit card numbers get sent to OpenAI, Anthropic, and other providers — with no way to trace or recall them.
Prompt injection bypasses guardrails
Attackers embed instructions in user input to extract system prompts, training data, or internal APIs. Most AI wrappers have zero injection detection.
No audit trail for compliance
When regulators ask 'what data was sent to which model, when, and why?' — most teams can't answer. Every unanswered question is a finding.
Shadow AI with no cost controls
Developers spin up AI integrations without budget caps or request limits. A single runaway loop can burn through thousands in API credits.
Every field classified and redacted
PLCY detects and masks PII, PHI, financial data, and secrets before payloads leave your network. The model never sees raw data.
Injection detection and blocking
Deterministic pattern matching catches prompt injection, jailbreak attempts, and social engineering — and blocks them before the model is invoked.
Complete audit trail with OTel traces
Every request generates an OpenTelemetry trace with before/after snapshots, model routing decisions, and compliance annotations.
Token budgets and cost governance
Per-user and per-vertical rate limits, token budgets, and cost caps. Track spend per request and kill runaway loops automatically.
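The cost-governance idea above reduces to a small amount of bookkeeping. This is a minimal sketch, assuming a flat per-user budget; the 10,000-token limit and the `charge` helper are illustrative, not PLCY's actual API, and real limits are per-vertical and configurable.

```python
from collections import defaultdict

# Hypothetical flat per-user budget; real limits are per-vertical and configurable.
TOKEN_BUDGET = 10_000
spent = defaultdict(int)

def charge(user: str, tokens: int) -> bool:
    """Debit a user's token budget; refuse once it would be exceeded."""
    if spent[user] + tokens > TOKEN_BUDGET:
        return False  # kill switch for runaway loops
    spent[user] += tokens
    return True
```

A runaway loop stops the moment its cumulative spend would cross the cap: `charge("dev1", 9_000)` succeeds, but a follow-up `charge("dev1", 2_000)` is refused while smaller requests within the remaining budget still go through.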
How it works
Five deterministic steps.
Every request. Every response.
The PLCY pipeline runs in under 5ms with zero ML inference in the hot path. Each step is auditable, configurable, and compliance-aware.
Classify
Detect PII, secrets, prompt injection, and toxicity patterns in every request using deterministic regex and pattern matching.
Redact
Mask all sensitive data — SSNs, credit cards, API keys, medical record numbers — before the payload leaves your network.
Route
Apply industry-specific policies to select the right model and region. HIPAA workloads stay in us-east-1. PCI data never leaves your VPC.
Rate-Limit
Enforce per-user token budgets, request quotas, and cost caps. Track estimated spend per request in real time.
Log
Generate OpenTelemetry traces for every interaction. Scan AI responses for leaked PII. Produce tamper-proof audit records.
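The five steps above can be sketched as a single deterministic function. Everything here is illustrative — the patterns, the routing table, the token estimate, and the `run_pipeline` name are assumptions for the sketch, not PLCY's actual implementation.

```python
import re
import time

# Illustrative patterns and policy tables; the real rule set is configurable.
INJECTION = re.compile(r"ignore (all )?previous instructions", re.I)
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
ROUTES = {"healthcare": ("gpt-4o", "us-east-1"), "default": ("gpt-4o-mini", "us-west-2")}
MAX_TOKENS = 4096  # hypothetical per-request budget

def run_pipeline(user: str, vertical: str, prompt: str):
    trace = {"user": user, "vertical": vertical, "start": time.monotonic()}
    # 1. Classify: deterministic regex, no ML inference in the hot path.
    if INJECTION.search(prompt):
        trace["blocked"] = "prompt_injection"
        return None, trace
    # 2. Redact: mask sensitive fields before the payload leaves the network.
    redacted = SSN.sub("[SSN_REDACTED]", prompt)
    # 3. Route: pick model and region from the vertical's policy.
    model, region = ROUTES.get(vertical, ROUTES["default"])
    # 4. Rate-limit: rough 4-chars-per-token estimate against the budget.
    if len(redacted) // 4 > MAX_TOKENS:
        trace["blocked"] = "token_budget"
        return None, trace
    # 5. Log: latency and routing decision feed the audit trail.
    trace.update(model=model, region=region, latency_s=time.monotonic() - trace["start"])
    return {"model": model, "region": region, "prompt": redacted}, trace
```

A healthcare request comes back with its SSN masked and pinned to the compliant region, while an injection attempt returns `None` with the block reason recorded in the trace.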
Architecture
Deploys as a sidecar.
Zero code changes.
PLCY runs alongside your application as a Kubernetes sidecar or standalone proxy. Every AI request passes through the 5-step pipeline transparently.
Kubernetes sidecar
Single YAML manifest, runs in the same pod
Transparent proxy
No SDK, no code changes — just point your AI client at PLCY
Multi-provider
Works with OpenAI, Anthropic, Azure, Bedrock, and self-hosted models
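Repointing a client at the proxy is the whole integration. As a sketch, assuming the official OpenAI Python SDK (which reads `OPENAI_BASE_URL` at client construction) and a placeholder sidecar address — port 8787 is not a PLCY default:

```python
import os

# Hypothetical sidecar address; port 8787 is a placeholder, not a PLCY default.
os.environ["OPENAI_BASE_URL"] = "http://localhost:8787/v1"

# From here on, an unmodified OpenAI client routes every request through the proxy:
#   client = openai.OpenAI()            # SDK picks up OPENAI_BASE_URL
#   client.chat.completions.create(...)
```

The application code itself never changes — only where its traffic goes.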
Why PLCY
Enterprise AI governance
that deploys in minutes.
Built for security teams who need to ship AI products without compromising on compliance or adding months to the timeline.
Deterministic regex and pattern matching — zero ML inference in the critical path. Runs as a sidecar, not an extra network hop.
HIPAA, PCI-DSS, and SOC2 policies ship out of the box. Each vertical gets its own model allowlist, region constraints, and audit rules.
Every request generates an OpenTelemetry trace with before/after payload snapshots, model selection rationale, and compliance annotations.
Deploy as a Kubernetes sidecar — no application code changes. Works with any LLM provider: OpenAI, Anthropic, Azure, self-hosted.
No vendor lock-in, no phone-home telemetry. Deploy on your own infrastructure or use the managed service.
Route requests to region-specific model endpoints based on compliance requirements. HIPAA workloads never leave us-east-1.
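The tamper-proof, auditable records described above can be sketched as hash-sealed snapshots. This is a minimal illustration — the `audit_record` helper and its field names are assumptions for the sketch; the real gateway emits this data as OpenTelemetry spans.

```python
import hashlib
import json
import time

def audit_record(request_id: str, before: str, after: str, model: str, reason: str) -> dict:
    """Illustrative audit record with before/after payload snapshots."""
    record = {
        "request_id": request_id,
        "timestamp": time.time(),
        "payload_before_sha256": hashlib.sha256(before.encode()).hexdigest(),
        "payload_after_sha256": hashlib.sha256(after.encode()).hexdigest(),
        "model": model,
        "routing_reason": reason,
    }
    # Sealing the record with its own hash makes later tampering detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```

Storing digests rather than raw payloads answers the regulator's "what was sent, where, and why?" without the audit log itself becoming a second copy of the sensitive data.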