The Governed AI Standard
Secure every AI
interaction
PLCY sits between your application and any LLM. It classifies, redacts, routes, rate-limits, and logs every request — in under 5 milliseconds.
Built for compliance-first enterprises
Interactive Demo
See the pipeline in action.
Run real enterprise scenarios through the PLCY 5-step pipeline. Watch PII get classified and redacted, injections get blocked, and responses get streamed — all in real time. Or try it with your own data using the custom input mode.
Healthcare
HIPAA compliance — PHI detection, MRN redaction
Fintech
PCI-DSS & SOX — card number masking, account protection
Dev Tools
SOC2 — API key detection, secret scanning
The problem
AI adoption without governance
is a compliance liability.
Every enterprise shipping AI products faces the same four risks. PLCY solves all of them in a single pipeline.
PII leaks to third-party models
Social security numbers, medical records, and credit card numbers get sent to OpenAI, Anthropic, and other providers — with no way to trace or recall them.
Prompt injection bypasses guardrails
Attackers embed instructions in user input to extract system prompts, training data, or internal APIs. Most AI wrappers have zero injection detection.
No audit trail for compliance
When regulators ask 'what data was sent to which model, when, and why?' — most teams can't answer. Every unanswered question is a finding.
Shadow AI with no cost controls
Developers spin up AI integrations without budget caps or request limits. A single runaway loop can burn through thousands in API credits.
Every field classified and redacted
PLCY detects and masks PII, PHI, financial data, and secrets before payloads leave your network. The model never sees raw data.
Injection detection and blocking
Deterministic pattern matching catches prompt injection, jailbreak attempts, and social engineering patterns — and blocks them before the model is invoked.
Complete audit trail with OTel traces
Every request generates an OpenTelemetry trace with before/after snapshots, model routing decisions, and compliance annotations.
Token budgets and cost governance
Per-user and per-vertical rate limits, token budgets, and cost caps. Track spend per request and kill runaway loops automatically.
How it works
Five deterministic steps.
Every request. Every response.
The PLCY pipeline runs in under 5ms with zero ML inference in the hot path. Each step is auditable, configurable, and compliance-aware.
Classify
Detect PII, secrets, prompt injection, and toxicity patterns in every request using deterministic regex and pattern matching.
Redact
Mask all sensitive data — SSNs, credit cards, API keys, medical record numbers — before the payload leaves your network.
Route
Apply industry-specific policies to select the right model and region. HIPAA workloads stay in us-east-1. PCI data never leaves your VPC.
Rate-Limit
Enforce per-user token budgets, request quotas, and cost caps. Track estimated spend per request in real time.
Log
Generate OpenTelemetry traces for every interaction. Scan AI responses for leaked PII. Produce tamper-proof audit records.
Architecture
Deploys as a sidecar.
Zero code changes.
PLCY runs alongside your application as a Kubernetes sidecar or standalone proxy. Every AI request passes through the 5-step pipeline transparently.
Kubernetes sidecar
Single YAML manifest, runs in the same pod
Transparent proxy
No SDK, no code changes — just point your AI client at PLCY
Multi-provider
Works with OpenAI, Anthropic, Azure, Bedrock, and self-hosted models
Why PLCY
Enterprise AI governance
that deploys in minutes.
Built for security teams who need to ship AI products without compromising on compliance or adding months to the timeline.
Deterministic regex and pattern matching — zero ML inference in the critical path. Runs as a sidecar, not a network hop.
HIPAA, PCI-DSS, and SOC2 policies ship out of the box. Each vertical gets its own model allowlist, region constraints, and audit rules.
Every request generates an OpenTelemetry trace with before/after payload snapshots, model selection rationale, and compliance annotations.
Deploy as a Kubernetes sidecar — no application code changes. Works with any LLM provider: OpenAI, Anthropic, Azure, self-hosted.
Full source code available. No vendor lock-in, no phone-home telemetry. Self-host or use the managed service.
Route requests to region-specific model endpoints based on compliance requirements. HIPAA workloads never leave us-east-1.
FAQ
Common questions
What does PLCY actually do to my data?+
PLCY intercepts every AI request/response and runs it through a 5-step pipeline: (1) Classify — detect PII, secrets, and injection attempts, (2) Redact — mask all sensitive data before it reaches the LLM, (3) Route — select the right model based on your compliance policy, (4) Rate-Limit — enforce token budgets and request quotas, (5) Log — create a full audit trail with trace IDs. Your original data never reaches the LLM unprotected.
How much latency does PLCY add?+
The pipeline adds < 5ms of overhead for classification and redaction. The majority of response time is the LLM call itself, which PLCY does not slow down. Streaming support ensures tokens arrive in real-time.
Which compliance frameworks does PLCY support?+
Currently: HIPAA (healthcare), PCI-DSS and SOX (fintech), and SOC 2 (developer tools). Each vertical gets its own policy engine with industry-specific PII types, system prompts, model routing, and audit requirements.
Can I deploy PLCY on-premise?+
Yes. PLCY is designed as a Kubernetes sidecar or standalone proxy. The Go backend is a single binary with no external dependencies beyond your LLM provider. Enterprise customers get VPC deployment support.
Does PLCY work with models other than OpenAI?+
The routing engine is model-agnostic. Currently the demo uses OpenAI, but the architecture supports Anthropic, Google, Azure OpenAI, and self-hosted models. The policy engine determines which model each vertical is allowed to use.
How much does PLCY cost?+
PLCY has a free Starter tier with up to 1,000 requests per month — perfect for evaluation and development. Enterprise pricing is custom based on request volume, verticals, and deployment requirements. Contact sales for a quote.
Is PLCY open source?+
The core pipeline is Apache 2.0 licensed. Enterprise features (custom policy engine, advanced secret scanning, dedicated support, and compliance reporting) require an Enterprise license.