Introducing Capability Observability

See Every Capability. Control Every Cost. Govern Every Agent.

SkillTrace is the observability layer for AI capabilities — a new category of DevOps that reveals which skills power your AI systems, where your LLM spend originates, and what risks are hiding in production.

Top Capabilities

github-pr-review
jira-create-ticket
sql-query
slack-notify

LLM Cost Attribution

$12.4K

attributed this week

sql-query
$4.2K
pr-review
$3.1K
ticket-create
$2.8K

Risk Monitor

filesystem-delete

+2,400% usage spike

db-admin-query

Unusual off-hours activity

247 capabilities nominal

Your AI Platform Has a Blind Spot

You can observe your infrastructure. You can trace your prompts. You can monitor your models. But you cannot see the capabilities that drive your AI systems.

What You Can Observe

  • Infrastructure: latency, uptime, error rates
  • Applications: APM, traces, logs
  • Models: token usage, prompt tracing, LLM latency

What You Cannot

Capabilities

Which skills, tools, and capabilities are your AI agents actually invoking in production?

Questions you cannot answer today

  • Which skills are your agents actually invoking?
  • Which capabilities are consuming 80% of your LLM budget?
  • Which tools spiked 2,400% overnight — and why?
  • Are deprecated skills still running in production?

Without Capability Observability, managing AI at scale is flying blind.

Introducing Capability Observability

A new layer of DevOps for AI-native platforms. Just as Datadog made infrastructure observable, SkillTrace makes AI capabilities observable.

Infrastructure Observability
Datadog, CloudWatch, Prometheus
Application Observability
New Relic, Sentry, Honeycomb
Model Observability
LangSmith, Helicone, Braintrust
Capability Observability
SkillTrace

The missing layer in your AI operations stack

Skill Usage

Which capabilities power your AI platform in real time

Cost Attribution

Which capabilities drive your LLM spend — down to the skill

Capability Lifecycle

Which skills are deprecated, unused, or drifting from baseline

Operational Risk

Which sensitive capabilities are spiking without explanation

Zero-Instrumentation Detection

SkillTrace runs inside your AI gateway and passively inspects prompts. No agent changes. No SDK integration. Deploy once at the gateway.

Agents

AI Gateway

SkillTrace

LLM Provider

Telemetry Store

JSONL / SQLite / HTTP / Kafka

<5ms

p99 detection latency

Zero

prompt modification

Fire & forget

telemetry emission
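In code, the fire-and-forget pattern might look like this minimal sketch (the class, event shape, and hook names are illustrative assumptions, not SkillTrace's published API): detection events are queued on the hot path in O(1), drained off the request path, and telemetry failures are swallowed so they can never block inference.

```typescript
// Illustrative sketch of fire-and-forget telemetry emission.
interface SkillEvent {
  skill: string;
  detectedAt: number;
  stage: "watermark" | "frontmatter" | "content" | "prefix";
}

class FireAndForgetEmitter {
  private queue: SkillEvent[] = [];
  constructor(private flush: (batch: SkillEvent[]) => Promise<void>) {}

  // Called on the hot path: push and return immediately, never awaited.
  emit(event: SkillEvent): void {
    this.queue.push(event);
    if (this.queue.length === 1) {
      // Schedule a drain off the request path.
      queueMicrotask(() => this.drain());
    }
  }

  private async drain(): Promise<void> {
    const batch = this.queue.splice(0);
    try {
      await this.flush(batch); // e.g. POST to HTTP endpoint or append JSONL
    } catch {
      /* dropped on failure — observability must never block inference */
    }
  }
}
```

The same emitter shape works for any of the listed sinks (JSONL file, SQLite, HTTP, Kafka); only the `flush` callback changes.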

Detection Pipeline: short-circuits on first match

1.

Watermark Scan

(microseconds)

Detects skill:// watermark comments embedded in prompt content

2.

Frontmatter Hash

(sub-millisecond)

SHA-256 hash of YAML frontmatter block against registry

3.

Content Hash

(< 1ms)

SHA-256 hash of full prompt content for exact matching

4.

Prefix Hash

(< 1ms)

SHA-256 hash of first 500 tokens for partial matching

Watermark example

<!-- skill://acme.github/pr-review@1.2.0 -->

You are a code review assistant...

A lightweight HTML comment embedded in the prompt. Detection takes microseconds with zero impact on LLM behavior.
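Putting the four stages together, a short-circuiting detector could be sketched roughly as follows. Everything here is an illustrative assumption, not SkillTrace internals: the function and registry names are invented, and the whitespace-split "tokenizer" in stage 4 stands in for whatever real tokenization the product uses.

```typescript
import { createHash } from "crypto";

// Hypothetical registry shape: each map goes from a SHA-256 hex digest
// to a skill identifier.
interface Registry {
  frontmatterHashes: Map<string, string>;
  contentHashes: Map<string, string>;
  prefixHashes: Map<string, string>;
}

const WATERMARK = /<!--\s*(skill:\/\/\S+)\s*-->/;

function sha256(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

function detectSkill(prompt: string, registry: Registry): string | null {
  // Stage 1: watermark scan — a single regex over the raw prompt.
  const mark = WATERMARK.exec(prompt);
  if (mark) return mark[1];

  // Stage 2: frontmatter hash — hash the leading YAML block, if present.
  const fm = /^---\n[\s\S]*?\n---/.exec(prompt);
  if (fm) {
    const hit = registry.frontmatterHashes.get(sha256(fm[0]));
    if (hit) return hit;
  }

  // Stage 3: content hash — exact match on the full prompt.
  const exact = registry.contentHashes.get(sha256(prompt));
  if (exact) return exact;

  // Stage 4: prefix hash — first 500 "tokens" (whitespace-split here
  // purely for illustration) for partial matching.
  const prefix = prompt.split(/\s+/).slice(0, 500).join(" ");
  return registry.prefixHashes.get(sha256(prefix)) ?? null;
}
```

Each stage returns as soon as it matches, so a watermarked prompt never pays for the hashing stages.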

Your Capability Command Center

Total visibility into every capability running across your AI platform. No blind spots. No guesswork.

Capability Usage

Active

1,247

skills detected today

+12% vs last week

Know exactly which skills power your AI platform — in real time

Cost Attribution

Spend

$3.2K

attributed to top 5 skills

sql-query: 34% of total

Trace every dollar of LLM spend back to the capability that drove it
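Conceptually, the attribution rollup is simple: once each request carries a detected skill, spend is a per-skill sum over token counts. A minimal sketch, assuming invented field names and placeholder per-token prices (real prices vary by model and provider):

```typescript
// Hypothetical per-request usage event, after skill detection.
interface UsageEvent {
  skill: string;           // e.g. "sql-query"
  promptTokens: number;
  completionTokens: number;
}

// Assumed prices per 1K tokens — illustrative only.
const PRICE_PER_1K = { prompt: 0.0025, completion: 0.01 };

// Sum spend per skill across a batch of events.
function attributeCost(events: UsageEvent[]): Map<string, number> {
  const bySkill = new Map<string, number>();
  for (const e of events) {
    const cost =
      (e.promptTokens / 1000) * PRICE_PER_1K.prompt +
      (e.completionTokens / 1000) * PRICE_PER_1K.completion;
    bySkill.set(e.skill, (bySkill.get(e.skill) ?? 0) + cost);
  }
  return bySkill;
}
```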

Capability Drift

Drift

18

deprecated skills still active

4 scheduled for removal

Surface deprecated, unused, or drifting skills before they become incidents

Operational Risk

Risk

3

anomalies flagged this week

filesystem-delete: +2,400%

Detect anomalous spikes in sensitive capabilities instantly
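The spike math behind a flag like "+2,400%" is a relative comparison against a trailing baseline. A toy sketch (the threshold and baseline window are assumptions, not SkillTrace defaults):

```typescript
// Percentage change of today's count vs. a baseline average.
function spikePercent(todayCount: number, baselineAvg: number): number {
  if (baselineAvg === 0) return todayCount > 0 ? Infinity : 0;
  return ((todayCount - baselineAvg) / baselineAvg) * 100;
}

// Flag when today's usage exceeds the trailing average by thresholdPct.
function isAnomalous(
  todayCount: number,
  baseline: number[],
  thresholdPct = 500
): boolean {
  const avg =
    baseline.reduce((a, b) => a + b, 0) / Math.max(baseline.length, 1);
  return spikePercent(todayCount, avg) >= thresholdPct;
}
```

For example, a sensitive skill averaging 10 invocations per day that suddenly sees 250 is a +2,400% spike, well past any reasonable threshold.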

Deploys Where You Already Run

One middleware. Every gateway. No agent changes.

LiteLLM

Python

Python callback plugin for the most popular AI gateway proxy

Vercel AI SDK

TypeScript

Native LanguageModelMiddleware for Vercel-powered AI apps

Express

Node.js

Drop-in middleware for Express-based OpenAI proxy servers

Cloudflare Workers

Edge

Edge-native adapter for Cloudflare AI Gateway Workers

Deployment typically requires one configuration change. No agent instrumentation required.
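The adapters above share one pattern: observe the request, record the detection, return the request untouched. A rough sketch of that read-only shape, loosely modeled on Vercel's `LanguageModelMiddleware` hook style (the interface and names here are illustrative, not SkillTrace's published API):

```typescript
// Minimal stand-in for the params a gateway middleware sees.
interface ChatParams {
  prompt: string;
}

type Inspector = (prompt: string) => void;

// Read-only inspection: the middleware looks at the prompt, fires the
// inspector, and returns the params object unchanged.
function createInspectionMiddleware(inspect: Inspector) {
  return {
    transformParams: async ({
      params,
    }: {
      params: ChatParams;
    }): Promise<ChatParams> => {
      try {
        inspect(params.prompt); // fire-and-forget detection
      } catch {
        /* inspection errors must never block the request */
      }
      return params; // returned untouched: zero prompt modification
    },
  };
}
```

The same contract underlies the "never modifies, blocks, or alters prompts" guarantee: the hook has no write path to the prompt at all.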

Built for Enterprise AI Governance

From multi-tenant gateway routing to capability audit trails — SkillTrace is designed for organizations that take AI operations seriously.

Scale

Built for high-throughput, multi-tenant AI platforms

  • Multi-registry federation
  • Multi-tenant gateway routing
  • Deterministic sampling
  • Hot-reload without restarts

Observe

Deep telemetry across every capability interaction

  • Streaming telemetry pipelines
  • Prometheus metrics endpoint
  • OpenTelemetry integration
  • Async fire-and-forget emission

Govern

Enterprise-grade controls for capability management

  • Capability audit trails
  • Lifecycle management
  • Risk anomaly alerts
  • Read-only prompt inspection

Security by Design

Zero prompt content logging
Read-only inspection — never modifies, blocks, or alters prompts
Registry signature verification
SOC 2 architecture readiness

The DevSecOps Platform for AI Capabilities

From skill creation to production governance — one platform. SkillTrace works alongside skills-check to cover the full capability lifecycle.

01

Develop

skills-check

Author and iterate on AI agent skills

02

Validate

skills-check

Analyze quality, generate fingerprints

03

Detect

SkillTrace

Runtime capability detection at the gateway

04

Analyze

SkillTrace

Usage, cost, drift, and risk analytics

05

Govern

SkillTrace

Lifecycle management and compliance

Deploy in Minutes

One middleware. One configuration. Full capability visibility.

Lightweight gateway middleware

SkillTrace runs as a thin middleware layer inside your existing AI gateway. Point it at your fingerprint registry and telemetry endpoint. Within minutes, every capability flowing through your gateway becomes visible.

  • No agent instrumentation required
  • No prompt modification — ever
  • Negligible performance impact (<5ms p99)

import express from "express";
import { createMiddleware } from "@skills-trace/express";

const app = express();

// Attach SkillTrace to the completion route: point it at the local
// fingerprint registry and the internal telemetry endpoint.
app.use(
  "/v1/chat/completions",
  createMiddleware({
    registry: "./skills-fingerprints.json",
    emitter: "http://telemetry.internal/api/events",
  })
);

The Organizations That Win Will Be the Ones That See

Every enterprise will deploy hundreds of AI agents. The question isn't whether you'll need Capability Observability — it's whether you'll have it before your first incident.