Instruction Protocol — The IP Fortress
inference-relay enforces a strict Logic-Content Separation model that protects both application intellectual property and user privacy. This document describes the architecture, its guarantees, and what it means for developers building on the library.
The Two-Envelope Architecture
Every request processed by inference-relay is split into two sealed envelopes before it leaves your application boundary. These envelopes are never merged, never co-transmitted, and never stored together.
Envelope 1 — Content
The actual prompt, completion, and context. This is owned entirely by the user. Content passes directly from your application to the provider endpoint. It never touches relay infrastructure, is never logged by the library, and is never accessible to any intermediary.
Envelope 2 — Logic
Execution parameters, routing rules, and operational metadata. This envelope is synchronized via RS256-signed payloads from the Protocol Authority. It governs how the request is routed, which provider handles it, what cost constraints apply, and how failures are recovered — but it contains zero information about what the user is actually saying.
This separation is not a convention. It is enforced architecturally. The two envelopes travel through different code paths with different access controls, and the library provides no API surface to reunite them.
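A minimal sketch of the split can make the boundary concrete. The field names and the `splitRequest` helper below are illustrative assumptions, not inference-relay's actual types:

```typescript
// Sketch of the two-envelope split. Field names are illustrative
// assumptions, not inference-relay's actual types.
interface ContentEnvelope {
  prompt: string;          // user-owned; travels directly to the provider
  systemPrompt?: string;   // never visible to relay infrastructure
}

interface LogicEnvelope {
  routeAlias: string;      // opaque Tokenized Schema Alias, not a readable name
  costCeilingUsd?: number; // operational constraint; carries no content
}

// The split happens before anything crosses the application boundary,
// and nothing in the sketch (or the library) reunites the halves.
function splitRequest(req: { prompt: string; routeAlias: string }): {
  content: ContentEnvelope;
  logic: LogicEnvelope;
} {
  return {
    content: { prompt: req.prompt },
    logic: { routeAlias: req.routeAlias },
  };
}
```

Note that neither envelope type even has a field that could hold the other's data, which is the structural version of "no API surface to reunite them."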
Tokenized Schema Aliases
The Logic envelope does not use human-readable field names. Instead, it uses Tokenized Schema Aliases — opaque identifiers that map internal execution parameters to provider-specific formats.
These aliases serve two purposes:
- Portability — The same alias resolves to different provider-specific parameters depending on the active routing target. Your application never hardcodes provider-specific details.
- Opacity — An observer who intercepts the Logic envelope sees only opaque tokens. The mapping between those tokens and their semantic meaning is held exclusively in the signed manifest.
The alias mapping is synchronized from the Protocol Authority via signed manifests. If an alias changes — because a provider updates its API surface, or because a new routing rule takes effect — the manifest update propagates automatically. Your application code does not change.
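The resolution step can be sketched as a lookup against the manifest's mapping table. The token values and table shape below are invented for illustration; real manifests are signed and synchronized from the Protocol Authority:

```typescript
// Illustrative alias table. Real manifests are RS256-signed; the token
// values and provider parameter names below are invented examples.
type AliasManifest = Record<string, Record<string, string>>;

const manifest: AliasManifest = {
  tok_a1: { anthropic: "max_tokens", openai: "max_completion_tokens" },
  tok_b2: { anthropic: "temperature", openai: "temperature" },
};

// Resolve one opaque alias against the active routing target.
function resolveAlias(m: AliasManifest, alias: string, provider: string): string {
  const resolved = m[alias]?.[provider];
  if (resolved === undefined) {
    throw new Error(`no mapping for alias ${alias} on provider ${provider}`);
  }
  return resolved;
}
```

An interceptor sees only `tok_a1`; without the manifest, the semantic meaning (`max_tokens`) is unrecoverable.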
Execution Context Log Auditing
Even if a user or operator inspects their local process logs, they see only the Analytical Frame — the operational metadata envelope (provider, model, token counts, cost, duration) that reveals nothing about your application's proprietary logic.
What IS visible in relay logs
- Provider used (e.g., Anthropic, Ollama, OpenAI)
- Model identifier
- Token counts (input, output)
- Cost (USD)
- Request duration
What is NOT visible in relay logs
- Prompt content
- Completion content
- System prompt content
- Tool schemas or definitions
- Custom workflow logic
- Routing decision rationale
This boundary is absolute. The library does not offer a debug mode or configuration option that would cause Content envelope data to appear in process logs.
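A hypothetical shape for a single relay log record, assembled from the visible-fields list above (the interface name and field names are assumptions, not the library's actual log schema):

```typescript
// Hypothetical shape of one relay log record: the Analytical Frame carries
// operational metadata only, with no field that could hold content.
interface AnalyticalFrame {
  provider: string;     // e.g. "anthropic", "ollama", "openai"
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
  durationMs: number;
}

const frame: AnalyticalFrame = {
  provider: "ollama",
  model: "example-model",
  inputTokens: 412,
  outputTokens: 128,
  costUsd: 0,
  durationMs: 1830,
};
```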
How It Works
The Logic-Content Separation handshake follows this sequence:
- Asymmetric Handshake — On startup, the library performs an asymmetric key exchange with the Protocol Authority. This establishes a secure channel for manifest delivery.
- Manifest Delivery — The Protocol Authority returns a signed manifest containing the current set of Tokenized Schema Aliases, routing rules, and operational parameters.
- Signature Verification — The library verifies the manifest signature using the embedded RS256 public key. A manifest that fails verification is rejected entirely.
- Alias Resolution — Execution parameters in outbound requests are decoded using the manifest's alias mappings. The resolution happens at the boundary — inside the library, before the request reaches any provider SDK.
- Graceful Degradation — If the manifest cannot be retrieved or verified, the library falls back to the Last Known Good configuration — the most recent manifest that passed verification.
- Protocol Integrity Enforcement — After 3 consecutive manifest verification failures, the library enters the SEC_DEGRADED state. In this state, requests still execute using the Last Known Good configuration, but the library emits a diagnostic event that your application can observe and act on.
At no point in this sequence does Content envelope data participate. The handshake, manifest, and alias resolution operate exclusively on the Logic envelope.
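The verification step (step 3) can be sketched with Node's built-in crypto module. The key pair generated here only stands in for the Protocol Authority's embedded RS256 public key; the function name and manifest shape are illustrative:

```typescript
import { createSign, createVerify, generateKeyPairSync } from "node:crypto";

// Sketch of Signature Verification: an RS256 (RSA + SHA-256) check over the
// manifest bytes. A manifest that fails this check is rejected entirely.
function verifyManifest(manifestJson: string, signature: Buffer, publicKeyPem: string): boolean {
  const verifier = createVerify("RSA-SHA256");
  verifier.update(manifestJson);
  return verifier.verify(publicKeyPem, signature);
}

// Stand-in signer; in reality only the Protocol Authority holds the private key.
const { publicKey, privateKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
  publicKeyEncoding: { type: "spki", format: "pem" },
  privateKeyEncoding: { type: "pkcs8", format: "pem" },
});

const manifestJson = JSON.stringify({ aliases: { tok_a1: "max_tokens" } });
const signer = createSign("RSA-SHA256");
signer.update(manifestJson);
const signature = signer.sign(privateKey);
```

Even a one-byte change to the manifest invalidates the signature, which is what makes a compromised network path unable to inject routing rules.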
What This Protects
The Logic-Content Separation model provides four distinct guarantees:
Application IP
Your system prompts, tool definitions, and workflow logic are never embedded in relay infrastructure. They exist only in your application code and in the Content envelope, which is transmitted directly to the provider. No intermediary — including inference-relay itself — has access to this material.
User Privacy
Prompt and completion content never leaves the user's machine when using the Native Subscription Gateway. For direct-to-provider routes (Anthropic, OpenAI), content travels over TLS directly to the provider endpoint with no relay intermediary.
Manifest Integrity
Forged or tampered manifests are cryptographically rejected. The RS256 signature verification ensures that only the Protocol Authority can issue valid manifests. A compromised network path cannot inject malicious routing rules or alias mappings.
Operational Opacity
External observers — including relay operators, network intermediaries, and anyone with access to process logs — cannot reconstruct your application's proprietary logic from telemetry data. The Analytical Frame reveals operational metrics only. The semantic meaning of those operations is protected by the Tokenized Schema Alias layer.
Developer Guidance
The Instruction Protocol operates at all three integration levels, with increasing degrees of developer control:
Level 1 — Auto-Patch
import 'inference-relay/auto';
Logic-Content Separation is automatic. No configuration needed. The manifest handshake happens on first request, and alias resolution is transparent. Your existing Anthropic SDK code gains IP protection without any code changes.
Level 2 — Explicit Client
import { InferenceRelay } from 'inference-relay';
const client = new InferenceRelay({ /* config */ });
Same protection as Level 1, with additional control over routing and provider selection. You can observe manifest state, react to SEC_DEGRADED events, and configure fallback behavior.
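One way to react to degradation at Level 2 is an event listener. The emitter-based listener API and payload shape below are assumptions for illustration, not inference-relay's documented surface; only the SEC_DEGRADED event name comes from the protocol description:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical sketch: an emitter standing in for the client's event surface.
const relayEvents = new EventEmitter();
const observed: number[] = [];

relayEvents.on("SEC_DEGRADED", (detail: { consecutiveFailures: number }) => {
  // Requests still run on the Last Known Good manifest; surface the
  // condition to operations instead of failing the request path.
  observed.push(detail.consecutiveFailures);
});

relayEvents.emit("SEC_DEGRADED", { consecutiveFailures: 3 });
```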
Level 3 — Environment Variable
INFERENCE_RELAY_ENDPOINT=https://relay.internal.corp
Enterprise IT manages the relay boundary. Application code is completely unaware that inference-relay is in the path. Logic-Content Separation is enforced at the infrastructure level, outside the application's trust boundary.
At every level, the guarantee is the same: your content stays yours, your logic stays opaque, and the relay infrastructure sees only what it needs to route the request.
Continue reading: Security Architecture for the full technical treatment of the Dumb Pipe Guarantee and signed trust chain.