Policy Governed AI Gateway

Secure every AI
interaction.

PLCY sits between your application and any LLM. It classifies, redacts, routes, rate-limits, and logs every request — in under 5 milliseconds.

$ plcy scan "Patient MRN#789456, SSN 987-65-4321"
 
[1/5] classify 2 PII detected (mrn, ssn) 0.8ms
[2/5] redact MRN → [PLCY:MRN:f0ff5d], SSN → [PLCY:SSN:a7f3b2]
[3/5] route gpt-5.4 · us-east-1 · HIPAA 0.1ms
[4/5] rate-limit 142 tokens · $0.00002 est. 0.1ms
[5/5] log trace_a1b2c3 · audit complete 0.2ms
 
Pipeline complete. 2 items redacted. 1.5ms total.

Built for compliance-first enterprises

HIPAA
ERISA
SOC 2
NAIC

Interactive Demo

See the pipeline in action.

Run real enterprise scenarios through the PLCY 5-step pipeline. Watch PII get classified and redacted, injections get blocked, and responses get streamed — all in real time. Or try it with your own data using the custom input mode.

Healthcare

HIPAA compliance — PHI detection, MRN redaction

Fintech

PCI-DSS & SOX — card number masking, account protection

Dev Tools

SOC 2 — API key detection, secret scanning

The problem

AI adoption without governance
is a compliance liability.

Every enterprise shipping AI products faces the same four risks. PLCY solves all of them in a single pipeline.

Without PLCY

PII leaks to third-party models

Social security numbers, medical records, and credit card numbers get sent to OpenAI, Anthropic, and other providers — with no way to trace or recall them.

Prompt injection bypasses guardrails

Attackers embed instructions in user input to extract system prompts, training data, or internal APIs. Most AI wrappers have zero injection detection.

No audit trail for compliance

When regulators ask "what data was sent to which model, when, and why?" — most teams can't answer. Every unanswered question is a finding.

Shadow AI with no cost controls

Developers spin up AI integrations without budget caps or request limits. A single runaway loop can burn through thousands in API credits.

With PLCY

Every field classified and redacted

PLCY detects and masks PII, PHI, financial data, and secrets before payloads leave your network. The model never sees raw data.

Injection detection and blocking

Deterministic pattern matching catches prompt injection, jailbreak attempts, and social engineering patterns — and blocks them before the model is invoked.
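As an illustration of the deterministic approach — the pattern list below is a toy, not PLCY's actual ruleset:

```python
import re

# Illustrative injection patterns only -- a real ruleset is much larger.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system prompt|instructions)", re.I),
    re.compile(r"you are now in developer mode", re.I),
]

def is_injection(prompt: str) -> bool:
    """Return True if any known injection pattern matches the prompt."""
    return any(p.search(prompt) for p in INJECTION_PATTERNS)
```

Because matching is pure regex, the check costs microseconds and a flagged request can be rejected before any model call is made.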

Complete audit trail with OTel traces

Every request generates an OpenTelemetry trace with before/after snapshots, model routing decisions, and compliance annotations.

Token budgets and cost governance

Per-user and per-vertical rate limits, token budgets, and cost caps. Track spend per request and kill runaway loops automatically.

How it works

Five deterministic steps.
Every request. Every response.

The PLCY pipeline runs in under 5ms with zero ML inference in the hot path. Each step is auditable, configurable, and compliance-aware.

01

Classify

Detect PII, secrets, prompt injection, and toxicity patterns in every request using deterministic regex and pattern matching.
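Deterministic classification of this kind can be sketched in a few lines — the detector set below is illustrative, not PLCY's shipped ruleset:

```python
import re

# Illustrative detectors -- a real deployment ships many more.
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN#?\d{6,}\b", re.I),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def classify(text: str) -> list[tuple[str, str]]:
    """Return (label, match) pairs for every sensitive span found."""
    return [(label, m.group())
            for label, rx in DETECTORS.items()
            for m in rx.finditer(text)]
```

Running this over the demo input `"Patient MRN#789456, SSN 987-65-4321"` flags both the MRN and the SSN, matching the `2 PII detected` step in the terminal output above.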

02

Redact

Mask all sensitive data — SSNs, credit cards, API keys, medical record numbers — before the payload leaves your network.
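The placeholder format in the demo above (`[PLCY:SSN:a7f3b2]`) suggests deterministic, hash-derived tokens; here is a minimal sketch under that assumption — the real token scheme is not documented here:

```python
import hashlib
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str, label: str, pattern: re.Pattern) -> str:
    """Replace each match with a deterministic [PLCY:LABEL:hash] placeholder.

    The short hex token lets the audit trail correlate placeholders across
    requests without ever storing the raw value.
    """
    def placeholder(m: re.Match) -> str:
        digest = hashlib.sha256(m.group().encode()).hexdigest()[:6]
        return f"[PLCY:{label}:{digest}]"
    return pattern.sub(placeholder, text)
```

Because the same input always produces the same token, downstream systems can still join on the placeholder while the raw value never leaves the network.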

03

Route

Apply industry-specific policies to select the right model and region. HIPAA workloads stay in us-east-1. PCI data never leaves your VPC.

04

Rate-Limit

Enforce per-user token budgets, request quotas, and cost caps. Track estimated spend per request in real time.
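A per-user budget check of this shape might look like the following sketch — the caps and pricing are placeholders, not PLCY defaults:

```python
from dataclasses import dataclass

@dataclass
class Budget:
    """Per-user token and cost caps. Numbers are illustrative."""
    max_tokens: int = 100_000
    max_cost_usd: float = 5.00
    used_tokens: int = 0
    used_cost_usd: float = 0.0

    def admit(self, tokens: int, cost_usd: float) -> bool:
        """Admit the request only if it fits both remaining caps."""
        if (self.used_tokens + tokens > self.max_tokens
                or self.used_cost_usd + cost_usd > self.max_cost_usd):
            return False  # blocked: a runaway loop stops here
        self.used_tokens += tokens
        self.used_cost_usd += cost_usd
        return True
```

Checking both caps before admitting the request is what turns a runaway loop from a surprise invoice into a stream of rejected calls.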

05

Log

Generate OpenTelemetry traces for every interaction. Scan AI responses for leaked PII. Produce tamper-proof audit records.
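Setting the OpenTelemetry export aside, the tamper-proof property can be illustrated with a hash-chained audit log — a sketch of the technique, not PLCY's actual record format:

```python
import hashlib
import json

def append_record(log: list[dict], record: dict) -> None:
    """Append an audit record whose hash chains to the previous entry,
    so any later edit to an earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {**record, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; False means the trail was tampered with."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body.get("prev") != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Because each record's hash covers the previous record's hash, silently rewriting any historical entry invalidates every record after it.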

Architecture

Deploys as a sidecar.
Zero code changes.

PLCY runs alongside your application as a Kubernetes sidecar or standalone proxy. Every AI request passes through the 5-step pipeline transparently.

Your Application
      |
      |  all AI requests
      v
PLCY Pipeline
classify → redact → route → rate-limit → log
      |
      |  redacted payload
      v
OpenAI · Anthropic · Azure · Self-hosted

Kubernetes sidecar

Single YAML manifest, runs in the same pod

Transparent proxy

No SDK, no code changes — just point your AI client at PLCY

Multi-provider

Works with OpenAI, Anthropic, Azure, Bedrock, and self-hosted models
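As a sketch of the sidecar pattern — the image name, port, and the `OPENAI_BASE_URL` convention below are assumptions for illustration, not the actual PLCY manifest:

```yaml
# Sketch only: image name, port, and env var are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest
      env:
        - name: OPENAI_BASE_URL          # point the AI client at the sidecar
          value: "http://localhost:8787/v1"
    - name: plcy
      image: plcy/gateway:latest         # hypothetical image name
      ports:
        - containerPort: 8787
```

The application keeps using its existing OpenAI-compatible client; only the base URL changes, so all traffic transparently flows through the sidecar.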

Why PLCY

Enterprise AI governance
that deploys in minutes.

Built for security teams who need to ship AI products without compromising on compliance or adding months to the timeline.

<5ms
Pipeline overhead

Deterministic regex and pattern matching — zero ML inference in the critical path. Runs as a sidecar, not a network hop.

3
Compliance frameworks
HIPAA
PCI-DSS
SOC 2

HIPAA, PCI-DSS, and SOC 2 policies ship out of the box. Each vertical gets its own model allowlist, region constraints, and audit rules.

100%
Audit coverage

Every request generates an OpenTelemetry trace with before/after payload snapshots, model selection rationale, and compliance annotations.

K8s
Sidecar deployment

Deploy as a Kubernetes sidecar — no application code changes. Works with any LLM provider: OpenAI, Anthropic, Azure, self-hosted.

Zero lock-in
Self-hostable

No vendor lock-in, no phone-home telemetry. Deploy on your own infrastructure or use the managed service.

Geo-fenced
Data residency

Route requests to region-specific model endpoints based on compliance requirements. HIPAA workloads never leave us-east-1.