autrace
Company

Every LLM call deserves a control layer.

We started Autrace because shipping AI features fast and shipping them safely shouldn't be a trade-off. Most teams skip the control layer entirely - no policy enforcement, no PII guardrails, no audit trail. That's a ticking clock, not a feature.

Autrace is the proxy that sits between your application and any LLM provider. One endpoint change. Policy enforcement, PII filtering, and cryptographically chained logs on every call - no SDK changes, no code rewrites, no trade-offs.
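The "cryptographically chained logs" mentioned above can be sketched in a few lines. This is an illustrative hash-chain in TypeScript using Node's built-in crypto module, not Autrace's actual implementation; the entry fields and payloads are made up for the example. The idea: each log entry commits to the hash of the previous one, so editing any past record invalidates every hash after it.

```typescript
import { createHash } from "node:crypto";

// Illustrative hash-chained audit log. Field names and payloads are
// hypothetical, not Autrace's real schema.
interface AuditEntry {
  payload: string;   // e.g. serialized request metadata
  prevHash: string;  // hash of the previous entry ("" for the first)
  hash: string;      // SHA-256 over prevHash + payload
}

function appendEntry(chain: AuditEntry[], payload: string): AuditEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  return [...chain, { payload, prevHash, hash }];
}

// Recompute every hash from the start; a single edited entry
// breaks the chain from that point onward.
function verifyChain(chain: AuditEntry[]): boolean {
  return chain.every((entry, i) => {
    const prevHash = i === 0 ? "" : chain[i - 1].hash;
    const expected = createHash("sha256")
      .update(prevHash + entry.payload)
      .digest("hex");
    return entry.prevHash === prevHash && entry.hash === expected;
  });
}

let log: AuditEntry[] = [];
log = appendEntry(log, '{"model":"gpt-4o","pii_redacted":true}');
log = appendEntry(log, '{"model":"gpt-4o","policy":"allow"}');
console.log(verifyChain(log)); // true

// Tampering with the first record is detectable:
const tampered = log.map((e, i) =>
  i === 0 ? { ...e, payload: "forged" } : e
);
console.log(verifyChain(tampered)); // false
```

Because each hash depends on all prior entries, verifying the chain end-to-end is enough to detect any silent rewrite of history.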

Mission
"Make production AI observable, auditable, and policy-enforced - by default, for every team."

How we work

Ship honest software

We document exactly what Autrace covers and what it doesn't. Our security page lists the 5 OWASP LLM Top 10 categories we address - and the 5 we don't. No marketing theatre.

Default to transparency

Our gateway is open-core. The core proxy, policy engine, and audit trail are MIT-licensed. You can read every line that touches your prompts. No black boxes in the trust boundary.

Security is a feature

PII filtering, policy enforcement, and audit logging aren't add-ons. They're in the critical path of every proxied request. You can't accidentally skip them.

Build for engineers first

One environment variable swap to redirect your LLM calls. Native OpenAI SDK compatibility. Prometheus metrics. Structured JSON logs. Designed to work the way you already work.
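The one-variable swap can be sketched as follows. This assumes an OpenAI-compatible client that honors an `OPENAI_BASE_URL`-style override; the gateway URL is a hypothetical placeholder, not a documented Autrace endpoint.

```typescript
// Hypothetical example: redirecting LLM traffic through a gateway by
// overriding the base URL. The URL below is a placeholder.
process.env.OPENAI_BASE_URL = "https://gateway.example.com/v1";

// Application code stays unchanged: any OpenAI-compatible client
// configured from this variable now routes through the proxy.
const config = {
  baseURL: process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
  apiKey: process.env.OPENAI_API_KEY,
};

console.log(config.baseURL); // https://gateway.example.com/v1
```

Since the proxy speaks the same wire protocol as the upstream provider, nothing else in the request path needs to change.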

Open-core model

The Autrace gateway - core proxy, policy engine, PII filter, audit trail - is MIT-licensed and fully self-hostable. If you want to run it on your own infrastructure, read every line of code, and never pay us a cent, that's a supported use case.

The cloud SaaS adds managed infrastructure, automatic updates, compliance certifications (SOC 2, HIPAA), and support SLAs. You pay for operations and compliance assurance - not for the right to use the software.

Timeline

Q1 2025
Project started

First prototype of the policy-enforced LLM proxy - 3 rules, 1 provider, no logging.

Q2 2025
Core gateway stable

PII filtering, immutable audit trail, and OpenAI-compatible proxy shipped. First closed beta users.

Q3 2025
Multi-provider routing

Unified integration layer: 30+ LLM providers accessible through a single API key. Cost-based routing added.

Q4 2025
Security hardening

SSRF protection, payload size limits, prompt injection detection, OWASP LLM Top 10 audit completed.

Q1 2026
Public launch

Open-core release. Cloud Enterprise SaaS in public beta.

Q2 2026
SOC 2 Type II audit begins

Working with auditors on Type II certification. Target completion Q3 2026.

Q4 2026
HIPAA BAA

Dedicated infrastructure, data residency options, and a HIPAA Business Associate Agreement.

Open roles

We're a small team building foundational infrastructure for production AI. If that sounds like the right problem, we'd like to talk.

Senior Backend Engineer · Engineering
Remote (Global) · Full-time

Improve gateway throughput, build new policy execution primitives, harden the audit trail. You'll work in Node.js/TypeScript with Fastify, PostgreSQL, Redis.

Security Engineer · Security
Remote (Global) · Full-time

Own OWASP LLM coverage expansion, drive SOC 2 compliance, build internal security tooling, and be the first reviewer on every change that touches the policy engine.

Developer Relations · Growth
Remote (Global) · Full-time

Help engineers understand why LLM security matters before incidents happen. Write technical content, maintain example integrations, and talk to users every day.


Don't see a role that fits? Send us a note - we're always interested in strong engineers and security practitioners.