Replace any API connection in minutes
No API keys. No rotation. No downtime. Connect systems using cryptographic identity instead of bearer tokens. Run alongside your existing API, shift traffic gradually, remove keys when ready.
connect → send
no API keys, ever
run alongside, switch when ready
Deploy in 15 Seconds
Production-ready Full Control setup for paying customers
Replace API Keys with Identity
Start with 3-month free trial. No credit card required.
Start Free Trial: $5/mo Basic • $10/mo Middle • $15/mo Enterprise

```typescript
import { connect } from '@private.me/xlink';

// Zero-config service connection with automatic discovery
const connection = await connect('payments-service');

if (connection.ok) {
  // Send encrypted message to discovered service
  await connection.value.agent.send({
    to: connection.value.did,
    payload: { action: 'createCharge', amount: 100 },
    scope: 'payments'
  });
}

// Alternative: explicit pattern for more control
const conn = await connect('payments');
await conn.value.agent.send({ to, payload });
```
```typescript
import { ok, err, type Result } from '@private.me/shared';

const result = await agent.send({
  to: recipientDID,
  payload: { action: 'createOrder', items: [...] },
  scope: 'orders'
});

if (result.ok) {
  // Success: result.value
  console.log('Envelope ID:', result.value.envelopeId);
} else {
  // Error: result.error
  console.error('Code:', result.error.code);
  console.error('Message:', result.error.message);
  console.error('Hint:', result.error.details?.hint);
}
```
Pattern 2: Manual Setup (Advanced)
For advanced users who need full control over registry, transport, and identity management:
```typescript
import { Agent, MemoryTrustRegistry, LoopbackTransport } from '@private.me/xlink';

const registry = new MemoryTrustRegistry();
const transport = new LoopbackTransport();

// Create two agents (entity-to-entity communication)
const alice = await Agent.quickstart({ name: 'Alice', registry, transport });
const bob = await Agent.quickstart({ name: 'Bob', registry, transport });

// Alice sends to Bob
await alice.send({
  to: bob.did,
  payload: { type: 'greeting', text: 'Hello Bob!' },
  scope: 'chat',
});

// Bob receives
const envelope = transport.outbox[0]!;
const msg = await bob.receive(envelope);
console.log(msg.value.payload); // { type: 'greeting', text: 'Hello Bob!' }
```
const agent = await Agent.quickstart()
await agent.send({ to, payload, scope })
const msg = await agent.receive(envelope)
Entity ↔ Entity Communication
Both parties have cryptographic identity. Both verify signatures. Both enforce scope-based permissions.
Entity ↔ Entity. Not client → server.
Both have identity. Both verify. Both enforce scope.
Traditional APIs establish asymmetric connections: client (API key) → server (identity). The client authenticates by presenting a secret key, while only the server holds a verifiable identity. This creates an inherent power imbalance and a single point of compromise.
Xlink establishes entity-to-entity connections where both sides have cryptographic identities and verify each other's messages. No API keys, no asymmetric trust, no centralized gateways — just two independent entities with a bilateral trust relationship.
| Property | Traditional API (Client → Server) | Xlink ACI (Entity ↔ Entity) |
|---|---|---|
| Client identity | API key (bearer token, stateless) | Cryptographic DID (Ed25519 keypair) |
| Server identity | Domain + TLS cert (DNS-based) | Cryptographic DID (same as client) |
| Authentication | Client proves possession of key | Both sides verify signatures (mutual) |
| Authorization | Server-side policy (RBAC, scopes) | Both sides enforce scopes (bilateral) |
| Message integrity | HTTPS (transport-level only) | Per-message signatures (end-to-end) |
| Replay protection | None (stateless tokens) | Nonce store on both sides |
| Compromise radius | Leaked key = global access | Compromised identity = single peer affected |
| Rotation | Ongoing (keys expire, rotate) | One-time setup (identity permanent) |
| Centralized gateway | Required (API gateway, auth server) | Optional (direct peer-to-peer works) |
| Dependency on DNS/PKI | Yes (domain name + CA trust chain) | No (cryptographic DIDs, no DNS) |
The shift from client-server to entity-to-entity removes the asymmetry: both sides have identity, both sides verify, both sides enforce policy. No "client" vs "server" — just two independent entities with a bilateral trust relationship.
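The replay-protection row in the table above can be sketched as a minimal nonce store. This is an illustrative stand-in, not Xlink's internal API: each side remembers every envelope nonce it has accepted and rejects repeats.

```typescript
// Minimal replay guard: remember every accepted envelope nonce.
// A production store would also expire old entries to bound memory.
class NonceStore {
  private seen = new Set<string>();

  // Returns true on first delivery, false on any replay.
  accept(nonce: string): boolean {
    if (this.seen.has(nonce)) return false;
    this.seen.add(nonce);
    return true;
  }
}

const store = new NonceStore();
console.log(store.accept("envelope-nonce-1")); // → true (first delivery)
console.log(store.accept("envelope-nonce-1")); // → false (replay rejected)
```

Because both sender and receiver run their own store, a captured envelope cannot be re-submitted to either side.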
Bilateral Authorization (Pro Tier)
Defense-in-depth security where both sender and receiver independently validate scopes — even if one side is compromised, the other side blocks unauthorized operations.
Traditional APIs enforce authorization server-side only. If the server is compromised or misconfigured, it may accept requests with elevated scopes. Xlink's bilateral authorization adds receiver-side scope validation as a second layer of defense.
The receiveScopes parameter defines which scopes a receiver will accept. When a message arrives, the receiver validates that the sender's scope is within the allowed set — independent of what the sender claims or what the trust registry permits. This prevents scope escalation attacks and limits blast radius if sender credentials are compromised.
```typescript
import { Agent } from '@private.me/xlink';

// Payment processor restricts incoming scopes to payment operations only
const processor = await Agent.create({
  registry,
  transport,
  receiveScopes: ['payment:process', 'payment:refund']
});

// If a sender tries the 'admin:config' scope, the receiver rejects it
// even if the sender has that scope in the trust registry
const result = await processor.receive(envelope);
if (!result.ok) {
  console.error('Rejected: scope not in receiveScopes allowlist');
}
```
Defense-in-depth benefit: Even if an attacker compromises the sender's credentials or the trust registry itself, they cannot execute operations outside the receiver's receiveScopes allowlist. The receiver enforces its own security policy independently.
This pattern is especially valuable in multi-tenant environments, regulated industries (healthcare, finance), and IoT deployments where devices must reject commands outside their operational scope.
Executive Summary
Xlink establishes entity-to-entity connections where both sides have cryptographic identities and verify each other's messages. No API keys, no asymmetric trust, no centralized gateways — just two independent entities with a bilateral trust relationship.
Two functions cover 80% of use cases: Agent.quickstart() generates an Ed25519 + X25519 identity with hybrid post-quantum key exchange (X25519 + ML-KEM-768, always-on) and registers it with default settings — zero configuration required. agent.send() encrypts a payload with AES-256-GCM, signs it with Ed25519 (+ ML-DSA-65 when postQuantumSig: true), and delivers it via any transport adapter. The receiver verifies signatures, checks replay protection, validates scope, and decrypts — automatically. Both sides perform the same verification steps, enforcing mutual trust rather than client-server hierarchy. For advanced use cases, Agent.create() accepts custom registry and transport configuration.
When you need information-theoretic security, split-channel mode shards messages via XorIDA (threshold sharing over GF(2)) and routes each share independently. An attacker who compromises any single channel learns nothing about the plaintext — breaking it is not merely computationally hard but mathematically impossible.
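The idea behind XOR-based sharing can be shown in a few lines. This is an illustrative sketch of the principle, not the XorIDA implementation: all shares but one are pure randomness, and the last is the plaintext XORed with all of them, so any incomplete subset is statistically independent of the message.

```typescript
import { randomBytes } from "node:crypto";

// Split plaintext into n shares; XOR of ALL shares recovers it.
function xorSplit(plaintext: Uint8Array, n: number): Uint8Array[] {
  const shares: Uint8Array[] = [];
  const last = Uint8Array.from(plaintext);
  for (let i = 0; i < n - 1; i++) {
    const r = Uint8Array.from(randomBytes(plaintext.length)); // uniform random share
    shares.push(r);
    for (let j = 0; j < last.length; j++) last[j] ^= r[j];
  }
  shares.push(last); // plaintext XOR all random shares
  return shares;
}

// Recombine by XORing every share together.
function xorJoin(shares: Uint8Array[]): Uint8Array {
  const out = new Uint8Array(shares[0].length);
  for (const s of shares) for (let j = 0; j < out.length; j++) out[j] ^= s[j];
  return out;
}

const msg = new TextEncoder().encode("pay $100");
const shares = xorSplit(msg, 3);
const rebuilt = new TextDecoder().decode(xorJoin(shares));
console.log(rebuilt); // → "pay $100" — but only with all three shares
```

Any two of the three shares are indistinguishable from random noise, which is why per-channel compromise reveals nothing.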
This entity-to-entity model removes the architectural assumptions of traditional APIs: no bearer tokens to rotate, no central auth servers to scale, no DNS dependency for trust. Each entity generates its own identity once, registers trusted peers, and communicates directly. The connection is symmetric, decentralized, and quantum-resistant by default.
Zero configuration out of the box. Zero npm runtime dependencies. Runs anywhere the Web Crypto API is available — Node.js, Deno, Bun, Cloudflare Workers, browsers. Dual ESM and CJS builds ship in a single package.
xLink also enables identity-based purchasing — AI agents can purchase ACIs using cryptographic signatures instead of email addresses. The ACI Purchase Endpoint accepts signed xLink envelopes, verifies agent identity, and delivers 6× higher rate limits (60 req/min vs 10 req/min for email-based purchases).
Multi-Agent Communication
Two agents establish cryptographic identities and communicate with full mutual authentication — no API keys, no central gateway, no configuration.
Alice and Bob: Peer-to-Peer Messaging
The following example demonstrates the complete lifecycle: both agents create identities using Agent.quickstart(), register with a shared trust registry, exchange messages with automatic encryption and signature verification, and validate each other's scope permissions.
```typescript
import { Agent, MemoryTrustRegistry, LoopbackTransport } from '@private.me/xlink';

// Shared infrastructure (in production, use HttpTrustRegistry + HttpsTransportAdapter)
const registry = new MemoryTrustRegistry();
const transport = new LoopbackTransport();

// Alice creates her identity
const alice = await Agent.quickstart({
  name: 'Alice',
  registry,
  transport,
});
console.log(`Alice DID: ${alice.did}`);
// Alice DID: did:key:z6MksZP8ChwZYSNgozYq...

// Bob creates his identity
const bob = await Agent.quickstart({
  name: 'Bob',
  registry,
  transport,
});
console.log(`Bob DID: ${bob.did}`);
// Bob DID: did:key:z6MkpH7eQ2KvYnPWbD...

// Alice sends a message to Bob
const sendResult = await alice.send({
  to: bob.did,
  payload: { type: 'greeting', text: 'Hello Bob!' },
  scope: 'chat',
});
if (!sendResult.ok) {
  throw new Error(`Send failed: ${sendResult.error}`);
}

// Bob receives and verifies the message
const envelope = transport.outbox[0];
const receiveResult = await bob.receive(envelope);
if (!receiveResult.ok) {
  throw new Error(`Receive failed: ${receiveResult.error}`);
}
console.log(`From: ${receiveResult.value.sender}`);
// From: did:key:z6MksZP8ChwZYSNgozYq...
console.log(`Payload:`, receiveResult.value.payload);
// Payload: { type: 'greeting', text: 'Hello Bob!' }
console.log(`Scope: ${receiveResult.value.scope}`);
// Scope: chat

// Bob replies to Alice
await bob.send({
  to: alice.did,
  payload: { type: 'response', text: 'Hi Alice!' },
  scope: 'chat',
});

// Alice receives Bob's reply
const reply = await alice.receive(transport.outbox[1]);
console.log(`Bob says: ${reply.value.payload.text}`);
// Bob says: Hi Alice!
```
Service-to-Service Communication
The same pattern applies to backend services. Here's a Payment service communicating with a Billing service:
```typescript
import { Agent, HttpTrustRegistry, HttpsTransportAdapter } from '@private.me/xlink';

// Shared infrastructure
const registry = new HttpTrustRegistry({ baseUrl: 'https://trust.corp.example.com' });
const transport = new HttpsTransportAdapter({ baseUrl: 'https://relay.corp.example.com' });

// Payment service creates its identity
const paymentService = await Agent.quickstart({
  name: 'payment-service',
  registry,
  transport,
});

// Billing service creates its identity
const billingService = await Agent.quickstart({
  name: 'billing-service',
  registry,
  transport,
});

// Payment service notifies Billing about a completed transaction
await paymentService.send({
  to: billingService.did,
  payload: {
    transactionId: 'txn_abc123',
    amount: 99.50,
    currency: 'USD',
    customerId: 'cust_xyz789',
  },
  scope: 'payment:notify',
});

// Billing service receives and processes the notification
const txn = await billingService.receive(envelope);
console.log(`Transaction from: ${txn.value.sender}`);
console.log(`Amount: $${txn.value.payload.amount}`);

// Billing service sends confirmation back to Payment service
await billingService.send({
  to: paymentService.did,
  payload: {
    status: 'recorded',
    invoiceId: 'inv_2024_001',
  },
  scope: 'billing:confirm',
});
```
Agent.quickstart() handles identity generation, key agreement setup (X25519 + ML-KEM-768), registry registration, and transport initialization. Both agents use the same API — no distinction between "client" and "server". The connection is symmetric, peer-to-peer, and quantum-resistant by default.
What Happens Under the Hood
When Alice calls send():
- Lookup: Queries the trust registry to retrieve Bob's public keys (Ed25519 signing, X25519 key agreement, ML-KEM-768 post-quantum KEM)
- Key Agreement: Performs hybrid ECDH (X25519 + ML-KEM-768 encapsulation) to derive a shared symmetric key
- Encryption: Encrypts the payload with AES-256-GCM using the derived key
- Signing: Signs the envelope with Alice's Ed25519 private key
- Transport: Sends the signed envelope to Bob via the transport adapter
When Bob calls receive():
- Signature Verification: Verifies Alice's Ed25519 signature using her public key from the registry
- Replay Protection: Checks the nonce store to ensure this envelope hasn't been seen before
- Scope Validation: Confirms Alice has permission to use the specified scope
- Key Agreement: Performs hybrid ECDH (X25519 + ML-KEM-768 decapsulation) to derive the same shared key
- Decryption: Decrypts the payload with AES-256-GCM
- Returns Result: Delivers the verified, decrypted payload to the application
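The two checklists above can be sketched end-to-end with Node's built-in crypto. This is an illustrative simplification, not Xlink's implementation: it shows only the classical half (X25519 ECDH + HKDF + AES-256-GCM + Ed25519) and omits the ML-KEM-768 post-quantum encapsulation, the nonce store, and scope checks.

```typescript
import {
  generateKeyPairSync, diffieHellman, hkdfSync,
  createCipheriv, createDecipheriv, randomBytes, sign, verify,
} from "node:crypto";

// Each party holds an Ed25519 signing pair and an X25519 key-agreement pair.
const alice = { sig: generateKeyPairSync("ed25519"), kx: generateKeyPairSync("x25519") };
const bob   = { sig: generateKeyPairSync("ed25519"), kx: generateKeyPairSync("x25519") };

// --- send(): lookup is skipped; key agreement, encryption, signing ---
const shared = diffieHellman({ privateKey: alice.kx.privateKey, publicKey: bob.kx.publicKey });
const key = Buffer.from(hkdfSync("sha256", shared, Buffer.alloc(0), "demo-envelope", 32));
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", key, iv);
const plaintext = Buffer.from(JSON.stringify({ type: "greeting", text: "Hello Bob!" }));
const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
const body = Buffer.concat([iv, cipher.getAuthTag(), ciphertext]); // iv | tag | ciphertext
const signature = sign(null, body, alice.sig.privateKey);          // Ed25519 over envelope

// --- receive(): verify signature, re-derive the same key, decrypt ---
if (!verify(null, body, alice.sig.publicKey, signature)) throw new Error("bad signature");
const shared2 = diffieHellman({ privateKey: bob.kx.privateKey, publicKey: alice.kx.publicKey });
const key2 = Buffer.from(hkdfSync("sha256", shared2, Buffer.alloc(0), "demo-envelope", 32));
const decipher = createDecipheriv("aes-256-gcm", key2, body.subarray(0, 12));
decipher.setAuthTag(body.subarray(12, 28));
const opened = Buffer.concat([decipher.update(body.subarray(28)), decipher.final()]);
console.log(JSON.parse(opened.toString()).text); // Hello Bob!
```

Note that both sides derive the identical symmetric key from their own private key plus the peer's public key, which is what makes the connection symmetric.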
Both sides perform the same verification steps — mutual authentication, mutual scope enforcement, mutual replay protection. Neither side has elevated privileges. The connection is truly peer-to-peer.
Developer Experience
Xlink provides real-time progress tracking and 45+ structured error codes to help developers build reliable, debuggable M2M systems.
Progress Callbacks
Both send() and receive() operations support onProgress callbacks for tracking long-running operations, especially useful for split-channel mode where multiple shares are transmitted independently.
```typescript
const envelope = await agent.send({
  to: recipientDid,
  payload: largeData,
  onProgress: async (event) => {
    switch (event.stage) {
      case 'encrypting':
        console.log('Encrypting payload...');
        break;
      case 'signing':
        console.log('Signing envelope...');
        break;
      case 'sending':
        console.log(`Sending share ${event.current}/${event.total}...`);
        break;
      case 'complete':
        console.log('Message sent successfully');
        break;
    }
  }
});

// Receive with progress tracking
const result = await agent.receive(envelope, {
  onProgress: async (event) => {
    if (event.stage === 'reconstructing') {
      console.log(`Reconstructing from ${event.current} shares...`);
    }
  }
});
```
Structured Error Handling
Xlink uses a Result<T, E> pattern with detailed error structures. Every error includes a machine-readable code, human-readable message, actionable hint, and documentation URL.
```typescript
interface ErrorDetail {
  code: string;     // e.g., 'INVALID_DID'
  message: string;  // Human-readable description
  hint?: string;    // Actionable suggestion
  field?: string;   // Field that caused the error
  docs?: string;    // Documentation URL
}
```
Error Categories
Xlink organizes 45+ error codes across 7 categories, making it easy to handle errors systematically:
| Category | Example Codes | When |
|---|---|---|
| Identity | INVALID_DID, KEYGEN_FAILED | DID validation, key generation, signing |
| Envelope | ENVELOPE_DECRYPTION_FAILED, PARSE_FAILED | Envelope creation, encryption, decryption |
| Transport | SEND_FAILED, NETWORK_ERROR | Network failures, transport adapter errors |
| Registry | DID_NOT_IN_REGISTRY, LOOKUP_FAILED | Trust registry operations |
| Key Agreement | ECDH_FAILED, INVALID_KEY_LENGTH | X25519 ECDH derivation |
| Split-Channel | HMAC_VERIFICATION_FAILED, INSUFFICIENT_SHARES | XorIDA splitting, reconstruction, HMAC |
| Agent | NONCE_REPLAY_DETECTED, SCOPE_DENIED | High-level agent operations, replay prevention |
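One practical use of the categories is error triage. The category assignments below mirror the table; the code→category map and the retry policy are illustrative choices, not an API the SDK is known to export.

```typescript
// Map the example codes from the table above into their categories.
type Category =
  | "identity" | "envelope" | "transport" | "registry"
  | "keyAgreement" | "splitChannel" | "agent";

const CATEGORY_BY_CODE: Record<string, Category> = {
  INVALID_DID: "identity",
  KEYGEN_FAILED: "identity",
  ENVELOPE_DECRYPTION_FAILED: "envelope",
  PARSE_FAILED: "envelope",
  SEND_FAILED: "transport",
  NETWORK_ERROR: "transport",
  DID_NOT_IN_REGISTRY: "registry",
  LOOKUP_FAILED: "registry",
  ECDH_FAILED: "keyAgreement",
  INVALID_KEY_LENGTH: "keyAgreement",
  HMAC_VERIFICATION_FAILED: "splitChannel",
  INSUFFICIENT_SHARES: "splitChannel",
  NONCE_REPLAY_DETECTED: "agent",
  SCOPE_DENIED: "agent",
};

// Example policy: transport and registry failures are often transient,
// so retry those; surface everything else immediately.
function isRetryable(code: string): boolean {
  const cat = CATEGORY_BY_CODE[code];
  return cat === "transport" || cat === "registry";
}

console.log(isRetryable("NETWORK_ERROR"));         // → true
console.log(isRetryable("NONCE_REPLAY_DETECTED")); // → false
```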
Fast Onboarding: < 2 Minute Setup
Zero-config service discovery and invite flow enable rapid M2M adoption. Setup time: < 2 minutes (vs 42-67 minutes for API keys).
The 2-Minute Setup Flow
Traditional API key setup requires 42-67 minutes of developer time per integration: account creation, API key generation, secret management setup, documentation reading, SDK installation, configuration, testing, and deployment. Xlink reduces this to under 2 minutes through zero-config service discovery and automatic trust establishment:
```shell
# Step 1: Initialize local identity (< 30 sec)
$ xlink init --name my-service
{
  "status": "initialized",
  "did": "did:key:z6MksZP8ChwZYSNgozYq...",
  "name": "my-service"
}

# Step 2: Connect to a service (< 90 sec)
$ xlink connect payments-service
{
  "status": "connected",
  "service": "payments-service",
  "did": "did:key:z6Mkf2rR8...",
  "endpoint": "https://api.payments.example.com",
  "elapsed_seconds": 1.3
}
```

```javascript
// Step 3: Use it immediately
const { connect } = require('@private.me/xlink');

const connection = await connect('payments-service');
await connection.value.agent.send({
  to: connection.value.did,
  payload: { action: 'createCharge', amount: 100 },
  scope: 'payments'
});
```
Zero-Config Discovery (3-Tier Lookup)
The connect() function accepts service names, domains, or URLs and automatically discovers connection details through a 3-tier lookup system:
| Method | Example | Lookup |
|---|---|---|
| Public Registry | `connect('payments-service')` | Query xlink.registry.io for registered service |
| .well-known | `connect('api.example.com')` | Fetch `https://api.example.com/.well-known/xlink.json` |
| Direct URL | `connect('https://api.example.com/xlink')` | Use URL directly |
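The tier selection in the table above amounts to classifying the target string. The tier names and lookup URLs follow the table; the heuristic and helper below are an illustrative sketch, not the SDK's actual resolution code.

```typescript
// Classify a connect() target into one of the three lookup tiers.
type Lookup =
  | { tier: "direct-url"; url: string }
  | { tier: "well-known"; url: string }
  | { tier: "public-registry"; name: string };

function classify(target: string): Lookup {
  if (target.startsWith("http://") || target.startsWith("https://")) {
    return { tier: "direct-url", url: target }; // Tier 3: use the URL as-is
  }
  if (target.includes(".")) { // looks like a bare domain
    return { tier: "well-known", url: `https://${target}/.well-known/xlink.json` };
  }
  return { tier: "public-registry", name: target }; // bare service name
}

console.log(classify("payments-service").tier);              // → public-registry
console.log(classify("api.example.com").tier);               // → well-known
console.log(classify("https://api.example.com/xlink").tier); // → direct-url
```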
Invite Flow (< 10 sec creation, < 60 sec acceptance)
The invite system enables effortless service-to-service connections. Creating an invite takes < 10 seconds, accepting takes < 60 seconds, and the invite recipient can immediately use the connection.
```shell
$ xlink invite billing-service --email billing@example.com
{
  "status": "created",
  "invite_url": "https://xlink.to/invite/a7Km9x...",
  "qr_code": "data:image/svg+xml,...",
  "expires_at": "2026-04-19T...",
  "message": "Share this link: https://xlink.to/invite/a7Km9x..."
}
```
When the recipient clicks the invite link, they see a one-click acceptance page with the inviter's service info. Accepting the invite automatically establishes the connection and adds both services to each other's trust registries.
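Conceptually, acceptance establishes mutual trust: afterward, each side's registry contains the other's DID. The `Invite` shape, `acceptInvite` helper, and Set-based registries below are hypothetical simplifications used only to illustrate the outcome.

```typescript
// Sketch: what a completed invite acceptance establishes.
interface Invite {
  inviterDid: string;
  inviterName: string;
}

// Both registries gain the other party's DID — a bilateral trust relationship.
function acceptInvite(
  invite: Invite,
  accepterDid: string,
  inviterRegistry: Set<string>,
  accepterRegistry: Set<string>,
): void {
  inviterRegistry.add(accepterDid);        // inviter now trusts accepter
  accepterRegistry.add(invite.inviterDid); // accepter now trusts inviter
}

const inviterTrust = new Set<string>();
const billingTrust = new Set<string>();
acceptInvite(
  { inviterDid: "did:key:z6MkInviterExample", inviterName: "payment-service" },
  "did:key:z6MkBillingExample",
  inviterTrust,
  billingTrust,
);
console.log(inviterTrust.has("did:key:z6MkBillingExample")); // → true
console.log(billingTrust.has("did:key:z6MkInviterExample")); // → true
```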
Zero-Downtime Migration (Dual-Mode Adapter)
For existing M2M connections using API keys, Xlink provides a DualModeAdapter that runs Xlink and API key authentication simultaneously. This enables zero-downtime migration with gradual rollout and usage tracking:
```javascript
const { DualModeAdapter } = require('@private.me/xlink');

// Create dual-mode adapter (tries Xlink first, falls back to API key)
const adapter = new DualModeAdapter({
  xlink: xlinkAgent, // Optional: add when ready
  fallback: {
    type: 'api-key',
    key: process.env.API_KEY,
    url: 'https://api.example.com',
  },
});

// Make calls (automatically tries Xlink → API key)
const result = await adapter.call('createCharge', { amount: 100 });

// Track migration progress
const metrics = adapter.getMetrics();
console.log(`Xlink usage: ${metrics.xlinkPercentage}%`);
// Output: "Xlink usage: 78%"

// Remove fallback when 100% migrated
adapter.removeFallback();
```
Comparison: Xlink vs Traditional API Keys
| Aspect | Traditional APIs | Xlink |
|---|---|---|
| Setup Time | 42-67 minutes (account + keys + config + docs + testing) | < 2 minutes (init + connect + use) |
| Secret Management | API keys in env vars, rotation every 90 days | No keys, zero rotation |
| Discovery | Manual documentation reading | Zero-config 3-tier lookup |
| Invite Mechanism | Email API key manually | One-click invite link, < 10 sec creation |
| Acceptance | Manual setup (42-67 min) | One-click acceptance (< 60 sec) |
| Network Effect | Linear (manual outreach) | Each connection enables further invites |
Three Paths to Production
Xlink adoption follows three distinct business paths: greenfield connections (new builds), migration (existing APIs), and enterprise deployment (governance at scale). Each path has its own optimal onboarding flow.
Path 1: Greenfield — New connections with no existing infrastructure. Pick a speed tier: Zero-Click, CLI-Guided, or Deploy Button.
Path 2: Migration — Existing API connections. Run ACI parallel, shift traffic gradually via Xfuse.
Path 3: Enterprise — Large-scale deployment. Configure trust policies, audit trails, and governance.
Path 1: Greenfield (New Connections)
For new M2M connections with no existing API infrastructure, Xlink offers three speed tiers. All three accomplish the same goal — establishing a secure identity-based connection — but at different setup speeds and automation levels.
Speed Tier 1: Zero-Click (15 seconds)
Fastest onboarding for developers trying Xlink for the first time. Share an invite code, paste it into your environment variables, and the SDK auto-discovers the recipient's identity and auto-registers your DID. First send succeeds immediately with zero manual configuration.
```shell
# .env file
XLINK_INVITE_CODE=XLK-abc123def456
```

```javascript
// Your code
const { Agent } = require('@private.me/xlink');

// Agent auto-accepts invite and configures trust on first use
const agent = Agent.lazy({ name: 'my-service' });

// First send triggers identity generation + auto-registration
await agent.send({
  to: 'did:key:z6MkPartnerDID...',
  payload: { action: 'processData', data: { ... } },
  scope: 'integration'
});
```
Setup time: ~15 seconds
Best for: First-time developers, rapid prototyping, quick demos
Key benefit: Instant working demo → share invite with colleagues → immediate connection
Speed Tier 2: CLI-Guided (90 seconds)
Interactive CLI command that guides developers through framework-specific setup. Generates boilerplate code for Node.js, Python, Go, or Rust. Validates the connection with a test message before completing.
```shell
# Both commands are equivalent (xlink-onboard is an alias)
$ npx @private.me/xlink init --invite XLK-abc123def456
# OR
$ npx @private.me/xlink xlink-onboard --invite XLK-abc123def456

? Select your framework: Node.js (Express)
? Project name: my-integration

Generated identity: did:key:z6MksZP8ChwZYSNgozYq...
Configured trust registry
Created src/xlink-client.ts
Created .env with connection details
Test message sent successfully

Ready to integrate. Run: npm start
```
Setup time: ~90 seconds
Best for: Developers integrating into existing systems, production setup
Key benefit: Production-ready code generated, validated connection before completion
Speed Tier 3: Deploy Button (10 minutes)
One-click infrastructure deployment for teams. Provisions complete production environment with Xlink pre-configured — Docker containers, Nginx reverse proxy, SSL certificates, health checks, and monitoring. Outputs production URLs when complete.
```markdown
<!-- Add to README.md -->
[Deploy with Xlink](https://github.com/private-me/xlink-infra/actions)
```

```
// Click button → GitHub Actions provisions:
// - DigitalOcean Droplet (or AWS EC2 / Google Cloud)
// - Docker Compose with Xlink services
// - Nginx reverse proxy + Let's Encrypt SSL
// - Prometheus monitoring + Grafana dashboards
// - Health check endpoints

// Outputs after 10 minutes:
{
  "service_url": "https://xlink.your-company.com",
  "did": "did:key:z6MksZP8ChwZYSNgozYq...",
  "status": "healthy"
}
```
Setup time: ~10 minutes
Best for: Teams deploying production infrastructure, platform integrations
Key benefit: Complete infrastructure → SSL, monitoring, backups — zero DevOps work
Speed Tier Comparison
| Tier | Setup Time | Automation Level | Output | Best For |
|---|---|---|---|---|
| Zero-Click | 15 seconds | Full auto | Working connection | Demos, prototyping |
| CLI-Guided | 90 seconds | Interactive | Production code | Integration work |
| Deploy Button | 10 minutes | Infrastructure | Live service + monitoring | Team deployments |
Path 2: Migration (Existing APIs)
Already have a working API connection? Migrate safely to Xlink using Xfuse — the threshold identity fusion bridge that runs ACI connections in parallel with your existing API, shifts traffic gradually, and deprecates the API when you're ready.
```javascript
// Step 1: Deploy ACI alongside existing API (no downtime)
const { Xfuse } = require('@private.me/xfuse');

const bridge = await Xfuse.create({
  legacy: { apiKey: process.env.API_KEY, endpoint: 'https://api.legacy.com' },
  modern: { agent: xlinkAgent }
});

// Step 2: Mirror traffic (validate both paths)
const result = await bridge.send({
  mode: 'mirror', // Send to both, compare results
  payload: { action: 'transfer', amount: 100 }
});

// Step 3: Shift traffic (10% → 50% → 100%)
bridge.setTrafficRatio({ aci: 0.5, api: 0.5 });

// Step 4: Deprecate API when ACI proves stable
bridge.setTrafficRatio({ aci: 1.0, api: 0.0 });
```
Migration time: Days to weeks (gradual traffic shift)
Downtime: Zero (parallel deployment)
Rollback: Instant (shift ratio back to API)
Learn more: Xfuse White Paper
Path 3: Enterprise (Governance at Scale)
Large organizations require centralized trust management, audit trails, and policy enforcement across hundreds of services. Enterprise deployment focuses on governance infrastructure rather than individual connection speed.
```javascript
// Step 1: Configure centralized trust registry
const registry = await TrustRegistry.enterprise({
  mode: 'centralized',
  endpoint: 'https://trust.corp.example.com'
});

// Step 2: Define org-wide policies
const policy = await PolicyEngine.create({
  rules: [
    { scope: 'finance', require: ['2FA', 'audit-log'] },
    { scope: 'public', require: ['rate-limit'] }
  ]
});

// Step 3: Enable audit trail for compliance
const audit = await AuditLog.enterprise({
  retention: '7-years', // SOX / SEC 17a-4
  encryption: 'org-key'
});

// All services inherit org config automatically
const agent = await Agent.create({
  name: 'payment-service',
  registry, // Shared across org
  policy,   // Enforced centrally
  audit     // Compliance copy to SOC
});
```
Deployment scope: Organization-wide (10s–1000s of services)
Configuration: Once (all services inherit)
Compliance: SOC 2, ISO 27001, FedRAMP, HIPAA-ready
Learn more: Authorization • Audit Logs • Credentials
Choosing Your Path
Your adoption path depends on your starting point:
- Building something new? → Greenfield Path — Start with Zero-Click (15s) for instant demo, upgrade to CLI (90s) for production code, or use Deploy Button (10min) for full infrastructure.
- Already have an API? → Migration Path — Use Xfuse to run ACI parallel, shift traffic gradually (10% → 50% → 100%), deprecate API when stable. Zero downtime.
- Deploying across an organization? → Enterprise Path — Configure trust registry, policies, and audit once. All services inherit org-wide governance automatically.
Cascading Failure Elimination
One expired OAuth token can force 500 AI agents to restart simultaneously. xLink eliminates tokens entirely, so cascades can't happen.
Critical Failure #1: Secret Sprawl
Before you can leak a key, you have to store it. API keys exist in dozens of locations across every deployment: .env files, CI/CD secrets, developer laptops, Kubernetes manifests, Docker images, Slack messages, documentation, git history. Each copy is an attack surface.
The real cost pattern: One developer commits a .env file to GitHub (happens thousands of times daily). That key was copied to staging, production, 14 microservices, 3 documentation repos, and 2 troubleshooting Slack threads. You now must rotate hundreds of dependent keys, update every deployment, restart every service, and notify every team—because one file got committed.
```text
// Developer workstation
.env
.env.local
.env.production
~/.aws/credentials
~/.config/gcloud/credentials
docker-compose.yml

// Version control (even after deletion, git history persists)
.git/objects/ab/cd1234...  ← Key committed 18 months ago

// CI/CD platforms
GitHub Actions secrets
GitLab CI variables
Jenkins credentials store
CircleCI environment variables

// Production infrastructure
Kubernetes secrets (base64 encoded, not encrypted)
Docker image layers
EC2 instance user data
Lambda environment variables
API Gateway configuration

// Communication channels
Slack: "Hey try this key: sk_live_..."
Email: "API credentials for new service"
Confluence: "Setup Instructions" page
Notion: "Onboarding Checklist"

// ONE LEAK = ROTATE ALL 17 LOCATIONS
```
```javascript
// Identity is generated on-device, never leaves
const agent = await Agent.quickstart({
  name: 'payment-service'
  // No API key parameter
  // No secret to copy
  // No .env file needed
  // No git history risk
});

// Device generates Ed25519 keypair locally
// Public key → DID (shared in trust registry)
// Private key → OS keychain (never transmitted, never stored in plaintext)

// If device compromised → only THAT device affected
// No cascade. No hundreds of rotations. No system-wide incident.
```
The architectural difference:
| Approach | Secret lifecycle |
|---|---|
| APIs | Secrets must be stored → Copied to 17 locations → One git commit = system-wide rotation incident |
| xLink | Identity generated locally → Never transmitted → Nothing to leak because nothing was stored |
Critical Failure #2: Rotation Nightmare
Even if you never leak a key, compliance requirements force you to rotate it. SOC 2, ISO 27001, and PCI-DSS mandate regular API key rotation—typically every 90 days. Every rotation requires downtime, cross-team coordination, and risks breaking production systems.
The quarterly rotation ritual: Your security team sends the "Q2 API Key Rotation" email. Every team must coordinate a maintenance window. You update production secrets, restart services, monitor for breakage, and hope nothing was missed. Four times per year, your entire engineering organization stops building to rotate credentials that shouldn't exist in the first place.
```shell
# Week 1: Generate new key, store in secret manager
aws secretsmanager create-secret \
  --name prod/api-key-v2 \
  --secret-string "sk_live_new..."

# Week 2: Deploy new key to all services (parallel)
kubectl set env deployment/payments API_KEY=sk_live_new...
kubectl set env deployment/billing API_KEY=sk_live_new...
kubectl set env deployment/invoicing API_KEY=sk_live_new...
# ... repeat for 47 more services

# Week 3: Coordinate rollout
# Can't deploy during business hours (risk of downtime)
# Can't deploy Friday (weekend incident risk)
# Can't deploy during month-end close (finance freeze)
# 2am Sunday maintenance window (engineers on-call)

# Week 4: Monitor for breakage
# Did we miss a service? Is caching causing old key usage?
# Are background jobs still using the old key?

# Week 5: Finally revoke old key
# (after monitoring period proves new key works)
aws secretsmanager delete-secret --secret-id prod/api-key-v1

# Repeat every 90 days. Forever.
```
```javascript
// Identity keys are never transmitted, so they can't be intercepted
const agent = await Agent.quickstart({ name: 'payment-service' });

// Each message signed with fresh cryptographic proof
const result = await agent.send({
  to: 'billing-service',
  action: 'process-payment',
  payload: { amount, currency }
  // Ed25519 signature generated for THIS message
  // Non-reusable (nonce prevents replay)
  // Verified via public DID (no secret transmitted)
});

// Compliance requirement satisfied by architecture:
// - No long-lived credentials exist → Nothing to rotate
// - Each signature is single-use → Replay impossible
// - Private key never leaves device → Interception impossible

// SOC 2 auditor: "How often do you rotate API keys?"
// You: "We don't have API keys. We use cryptographic identity."
// Auditor: "...approved."
```
Why rotation exists (and why it fails):
Rotation policies assume credentials will leak eventually. The logic: if a key leaked 60 days ago and you rotate every 90 days, the exposure window is limited. But this assumes you detect the leak, which most organizations don't. Verizon's 2025 Data Breach Report found the median time to discover a breach is 28 days—but API key leaks in git history can go undetected for years.
Worse, rotation itself causes incidents. Every rotation is a chance to misconfigure, miss a service, or break a production flow. The very mechanism designed to reduce risk becomes a source of operational instability.
| Approach | Credential lifecycle |
|---|---|
| APIs | Long-lived secrets → Must rotate periodically → Quarterly coordination nightmare + production risk |
| xLink | Per-message signatures → Non-reusable by design → Rotation requirement eliminated architecturally |
Critical Failure #3: Shared Secrets Cascade
1,000 workers. 1 API key. 1 leak = total system compromise.
Most production systems share credentials across fleets of workers, services, or agents. This isn't negligence—it's the only way to operate with traditional API architecture. But when 1,000 workers share one API key, you've created a single point of failure where one compromised worker forces a system-wide shutdown.
The architectural problem: When every worker shares the same API key, you can't isolate failures. You can't identify which worker made which request. You can't enforce per-worker rate limits—all 1,000 workers share the same quota. You can't revoke access to one misbehaving worker without shutting down the entire fleet. And when one worker is compromised, you must rotate the key, which stops all 1,000 workers simultaneously.
```typescript
// Every worker uses the same API key
const workers = await Promise.all(
  Array.from({ length: 1000 }, (_, i) =>
    createWorker({
      id: `worker-${i}`,
      apiKey: process.env.API_KEY // ⚠️ Same key for all 1,000
    })
  )
);

// Problems this creates:
// - Can't identify which worker made which request (no attribution)
// - Can't rate-limit per-worker (all share same quota)
// - Can't revoke one worker (must revoke entire fleet)
// - One compromised worker = rotate key = 1,000 workers offline
// - No isolation—one misbehaving worker affects entire system
```
```typescript
// Each worker generates its own unique cryptographic identity
const workers = await Promise.all(
  Array.from({ length: 1000 }, async (_, i) => {
    const agent = await Agent.quickstart({
      name: `worker-${i}`
      // Unique Ed25519 keypair per worker
      // Unique DID per worker
      // No shared secrets
    });
    return agent;
  })
);

// Benefits of unique identity:
// - Cryptographic attribution—know exactly which worker sent each message
// - Per-worker rate limits—one worker's quota doesn't affect others
// - Selective revocation—remove one worker from trust registry, others unaffected
// - Isolated failures—compromised worker only affects itself
// - No coordination required—each worker operates independently
```
The cascade effect:
When 1,000 workers share one API key and that key leaks, you don't have a "worker 437 is compromised" problem—you have a "shut down the entire fleet" problem. The architectural design forces a system-wide response to a localized failure.
| Single Point of Failure | APIs: 1 leaked key → 1,000 workers offline → System-wide incident → Emergency rotation → Coordinated restart |
| Isolated Failure | xLink: 1 compromised worker → 1 worker removed from trust registry → 999 workers continue unaffected → No coordination required |
| Attribution | APIs: Impossible (all workers use same credential) — Can't identify which worker made malicious request |
| Attribution | xLink: Cryptographic proof — Every message signed with worker's unique private key, verified via DID |
Real-world example: A financial services company runs 500 payment processing workers, each handling transactions for different customers. All 500 workers share the same Stripe API key. When worker 237's container is compromised through an unrelated vulnerability, the attacker now has access to Stripe credentials that work for the entire fleet.
The company has no choice: rotate the Stripe API key immediately. All 500 workers must be redeployed with the new key. Payment processing stops system-wide during the rotation window. [Illustrative scenario]: The incident response: 6 hours of downtime, $180K in lost revenue, 14 engineers pulled from other work, and a compliance report explaining why one compromised container took down the entire payment infrastructure.
With xLink: Worker 237's identity is removed from the trust registry. That specific worker can no longer authenticate. The other 499 workers continue processing payments without interruption. No emergency rotation. No system-wide coordination. No downtime. The incident is isolated to the compromised component, as it should be.
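The isolation property is easy to demonstrate. Below is a minimal in-memory sketch of a per-identity trust registry — the `MiniTrustRegistry` class and the DID strings are illustrative stand-ins, not the `@private.me/xlink` implementation:

```typescript
// Minimal in-memory trust registry: per-worker allow/revoke (illustrative sketch)
class MiniTrustRegistry {
  private allowed = new Set<string>();
  add(did: string): void { this.allowed.add(did); }
  revoke(did: string): void { this.allowed.delete(did); }
  isAllowed(did: string): boolean { return this.allowed.has(did); }
}

const registry = new MiniTrustRegistry();
const workers = Array.from({ length: 500 }, (_, i) => `did:key:worker-${i}`);
workers.forEach((did) => registry.add(did));

// Worker 237 is compromised: revoke only its identity
registry.revoke('did:key:worker-237');

const stillRunning = workers.filter((did) => registry.isAllowed(did));
console.log(stillRunning.length); // 499 — every other worker unaffected
```

Because each worker authenticates with its own identity, revocation is a single set-membership change, with no redeploy and no shared state to coordinate.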
The contrast:
| APIs | Shared secrets → Reusable credentials → One leak = system-wide → Every vendor ships this bug |
| xLink | Per-message identity → Non-reusable signatures → One failure = isolated → This bug cannot exist in the architecture |
Why This Costs Multi-Million Dollar Incidents
Industry-wide API failures cost $90 billion annually. But that's abstract. Here's what it means for your systems:
When your 4-hour ETL pipeline fails at hour 3 due to token expiry and restarts from zero, that's not a "slight delay"—it's tripled compute cost, missed SLA, and manual intervention at 2am.
Documented Real-World Failures
This isn't hypothetical. Every major platform ships this failure mode:
The Root Cause (Code Every Vendor Ships)
OAuth refresh failures aren't implementation bugs—they're architectural. Bearer tokens create stateful authentication that expires:
```typescript
// OAuth transport: token baked in at connection time
class OAuthTransport {
  constructor(accessToken) {
    this.headers = { Authorization: `Bearer ${accessToken}` };
    // ⚠️ Token expires, headers don't refresh → 401 → restart
  }

  async call(api) {
    return fetch(api, { headers: this.headers });
    // Workflow restarts from zero. Every time.
  }
}
```
```typescript
// Fresh signature per message—nothing to expire
class IdentityTransport {
  async call(api, payload) {
    const signature = this.agent.sign(payload);
    return fetch(api, {
      headers: {
        'X-Agent-DID': this.agent.did,
        'X-Signature': signature // Fresh every call
      },
      body: payload
    });
    // This failure mode cannot exist
  }
}
```
Enterprise Value: Seven Dimensions
When cascading failures can't happen, operational reality changes across every layer:
| Return Dimension | Enterprise Impact |
|---|---|
| Cost Elimination | No wasted compute on restarted workflows. SharePoint workflow auth failures eliminated. |
| SLA Compliance | Workflows complete on schedule. No missed deadlines from auth restarts. |
| Developer Productivity | No debugging "why did it restart?" No manual intervention for token refresh. |
| AI Agent Scale | Per-agent identity isolates failures. One agent fails ≠ all 500 agents restart. |
| Audit Trail Integrity | Clean logs (no partial workflow entries). SOC 2/ISO 27001 compliance simplified. |
| Multi-Step Reasoning | LangChain 50-step reasoning chains complete. No mid-chain auth interruptions. |
| Capacity Planning | Predictable resource usage. No 3× infrastructure buffer for restart storms. |
Who Ships This Bug? Everyone.
SharePoint workflows fail with AuthenticationException in production environments. Workflows restart from beginning. Manual intervention required. Widely reported across Microsoft Q&A forums. Microsoft Q&A →

"Constant intervention to solve seemingly random failures." Refresh tokens expire after 90 days. With MFA enabled, tokens expire every 14 days. No automatic recovery. Microsoft Q&A →

OAuth expiration kills autonomous workflows mid-execution. 401 authentication_error. No recovery path—requires manual /login. "OAuth credentials silently wiped on failed refresh." GitHub #12447 →

"Every session >1 hour fails." MCP servers use cached expired token, get HTTP 401, give up. Feature request filed: "Use OAuth refresh_token grant to silently refresh." GitHub #1797 →

Lower-Level Failures Force Higher-Level Redos
APIs turn small failures into full workflow resets. One $0.001 auth call can destroy $5 of completed work.
[Real cost example]: A sub-cent auth call failure ($0.0000035, AWS API Gateway pricing) can discard a 10-minute GPU inference job ($0.33-$0.67, H100 GPU at $2-4/hr, 2026 rates).
That's a 94,000× to 191,000× waste ratio.
Expensive work destroyed by cheap failures.
Every workflow builds expensive work on top of cheap API calls. When the cheap call fails, the expensive work disappears. This is the hidden cost pattern that makes cascading failures economically devastating.
```typescript
// High-level workflow: "Process invoice and send payment"
async function processInvoice(invoiceData) {
  // Step 1: AI parses invoice (5 minutes of expensive inference)
  const parsed = await aiService.parseInvoice(invoiceData);

  // Step 2: Validate against accounting rules (database queries)
  const validated = await accounting.validate(parsed);

  // Step 3: Generate payment authorization
  const auth = await payments.authorize(validated);

  // Step 4: Send payment → OAuth token expires HERE
  const result = await payments.send(auth); // 401 Unauthorized

  // RESULT: Entire workflow restarts from Step 1
  // Lost work: 5 min AI + validation + auth generation
  // Redo cost: 100% of all previous work thrown away
}
```
The multiplication math:
- Low-level failure: 1 API call fails (payment.send())
- Local redo: In a typical 4-step workflow (parse → validate → authorize → execute), one failure forces all 4 steps to restart (4× local waste per workflow)
- Fleet cascade: Multiply by 1,000 concurrent agents running the same integration
- Total waste: [Real cost example]: In a 4-step workflow running across 1,000 parallel agents, one auth failure forces 4,000 operations to restart (1 local redo × 4 steps × 1,000 agents). At $0.40/M tokens (GPT-4 equivalent pricing, 2026), that's $1.60 wasted per incident. At 10 incidents/day, $5,840/year in discarded compute. [Calculation assumes 4-step workflow - adjust based on your stack]
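The multiplication above can be checked as a runnable sketch. The token count per restarted step (~1,000) is our assumption to make the stated dollar figures concrete; swap in your own numbers:

```typescript
// Redo-cascade waste math (illustrative figures from the example above)
const steps = 4;              // parse → validate → authorize → execute
const agents = 1_000;         // concurrent agents running the same integration
const tokensPerStep = 1_000;  // ASSUMPTION: tokens consumed per restarted step
const pricePerMTokens = 0.40; // $/1M tokens (GPT-4-class pricing assumption)

const restartedOps = steps * agents; // one auth failure → 4,000 operations redone
const wastePerIncident =
  (restartedOps * tokensPerStep / 1_000_000) * pricePerMTokens; // ≈ $1.60
const wastePerYear = wastePerIncident * 10 * 365; // 10 incidents/day ≈ $5,840

console.log(restartedOps, wastePerIncident.toFixed(2), wastePerYear.toFixed(0));
```

Running this prints `4000 1.60 5840`, matching the figures in the example.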
Expensive Work Destroyed by Cheap Failures
The economic absurdity of cascading failures: the cheapest operation in your stack destroys work that's hundreds of thousands of times more expensive.
10-minute PDF analysis ($0.50 GPU cost) discarded on upload auth failure ($0.0000035 API call). 140,000× waste ratio. Agent restarts → regenerates → non-deterministic output → manual review required.
Redo cost: 100% + manual review

[Illustrative scenario]: 30-second market analysis complete. Execution API auth expires. Retry takes 8 seconds. Market moved 2% during redo ($1,800 opportunity cost on $90K position). The $1,800 redo cost is orders of magnitude more than the $0.0000035 auth call that triggered it.
Redo cost: 100% + $1,800 missed entry

Multi-turn LangChain conversation (6 messages deep, $0.24 in API costs). Auth fails on message 7 ($0.0000035). Context lost. 68,000× waste ratio. Agent restarts from zero. User frustration = escalation to human support.
Redo cost: Customer experience destroyed

500 lines of code generated ($0.80 in API costs). Storage auth fails ($0.0000035). 228,000× waste ratio. Regeneration produces different output (LLM non-determinism). Developer manually merges two versions.
Redo cost: Non-deterministic output

xLink Eliminates Redo Cascades Architecturally
When an agent fails, that agent restarts. Your $2 GPU job completes. The signature generation happens AFTER the work is done, not before.
- Work completes → signature generated → message sent
- Auth fails → signature invalid → only that send operation retries
- The 10-minute inference result is preserved
No shared state = no cascade = no redo multiplication.
The architectural difference: xLink eliminates workflow resets by reversing the execution order. Work completes FIRST, then gets signed. If the signature fails, only that send operation retries; the completed work is preserved. No shared state = no cascade = no redo.
This isn't better retry logic. This is removing the failure mode from the system.
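A minimal sketch of that ordering using Node's built-in Ed25519 support — `expensiveInference` and the flaky `deliver` transport are hypothetical stand-ins for real work and a real network:

```typescript
import { generateKeyPairSync, sign } from 'node:crypto';

const { privateKey } = generateKeyPairSync('ed25519');

// Expensive step: runs exactly once
let inferenceRuns = 0;
function expensiveInference(): string {
  inferenceRuns++;
  return JSON.stringify({ parsed: 'invoice-8472' });
}

// Hypothetical flaky transport: fails twice, then succeeds
let attempts = 0;
function deliver(payload: string, signature: Buffer): boolean {
  attempts++;
  return attempts > 2;
}

const result = expensiveInference(); // work completes FIRST
let delivered = false;
while (!delivered) {
  const signature = sign(null, Buffer.from(result), privateKey); // fresh per attempt
  delivered = deliver(result, signature); // only the cheap send retries
}

console.log(inferenceRuns, attempts); // 1 3 — work done once, send retried
```

The signature is recomputed per attempt at negligible cost; the inference result is never discarded.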
The Mechanism (One Sentence)
A $0.001 failure can force $5 of work to be repeated.
This happens because authentication happens BEFORE work execution in API architectures. When auth fails, all downstream work must restart.
xLink reverses this: work completes first, then gets signed. Authentication failure affects only the signature, not the work.
```typescript
// Same workflow, zero redo risk
async function processInvoice(invoiceData) {
  const parsed = await aiService.parseInvoice(invoiceData);
  const validated = await accounting.validate(parsed);
  const auth = await payments.authorize(validated);

  // Fresh cryptographic signature—always works
  const result = await agent.send({
    to: paymentsService,
    payload: auth
  });
  // No token to expire, no redo possible
  // Result: 0× waste multiplication, ever
}
```
Why xLink Can't Cascade
| OAuth | Stores tokens → Tokens expire → Expiry cascades |
| xLink | Computes signatures → Nothing stored → Nothing expires |
There's no token to leak. No refresh to fail. No state to corrupt. Cascades require reusable credentials. Signatures aren't reusable.
The Strategic Reality
Without cascading failure elimination, xLink is "better."
With it, xLink is necessary.
You don't replace working systems for 10% improvement. You replace systems when you realize the current architecture is fundamentally broken.
Every vendor rediscovers this bug. Microsoft, GitHub, and other major platforms all ship the same architectural flaw because OAuth's stateful authentication creates reusable credentials that cascade failures across systems.
xLink removes the mechanism. Not better refresh handling. Not smarter retry logic. No authentication state to expire.
You're one leaked OAuth token away from a system-wide collapse. Or you're running systems where authentication state doesn't exist and cascades can't happen.
That's not a feature comparison. That's an architectural choice.
No Granular Revocation
Agent #47 goes rogue. Your only option: revoke the shared key. All 1,000 agents stop.
The Problem: Security vs Availability
You have 1,000 AI agents running in production. They all authenticate using the same API key or OAuth client credentials because that's how shared secrets work — one credential shared across the fleet.
Agent #47 starts behaving suspiciously. Maybe it's compromised. Maybe it's misbehaving. Maybe you just need to rotate it out for maintenance.
You face an impossible choice:
Stop the security threat immediately.
But kill all 1,000 agents.
Complete production outage. Customers can't transact. Revenue stops.
Keep the other 999 agents operational.
But the rogue agent keeps running.
Security breach continues. Compliance violation. Potential data loss.
This isn't a theoretical problem. It's the fundamental architecture of shared secrets.
API keys: One key shared across the entire fleet. Revoking it kills everything that uses it.
OAuth client credentials: One client_id/client_secret pair per application. Revoking the client kills every instance of that application.
Service accounts: One service account shared by 1,000 workers. Disable the account, disable all workers.
Shared secrets force you to choose between security (revoke immediately) and availability (keep the fleet running).
You can't have both. The credential is shared — revoking it affects everyone who holds it.
The xLink Solution: Identity-Based Revocation
With xLink, every agent has its own cryptographic identity. Agent #47 has a unique DID. Agent #48 has a different DID. They don't share credentials.
```typescript
// Trust registry manages per-agent access
await trustRegistry.revoke('did:key:z6Mk...Agent47')

// Agent #47 stops immediately
// Agents #1-46, #48-1000 keep running
// Zero blast radius
```
| Scenario | API Key / OAuth | xLink |
|---|---|---|
| Revoke 1 compromised agent | Kill entire fleet | Revoke 1 DID only |
| Blast radius | 1,000 agents down | 1 agent down |
| Production impact | Complete outage | 0.1% capacity reduction |
| Time to remediate | Hours (re-deploy all agents with new key) | Milliseconds (registry update) |
| Customer-facing downtime | Yes — revenue loss | No — 999 agents handle load |
Zero Blast Radius
One compromised identity affects exactly one agent.
The trust registry holds per-agent authorization state. Revoking one DID removes that specific agent's access. The registry check happens on every message — the revoked agent's next send or receive fails immediately.
trustRegistry.revoke('did:key:z6Mk...Agent47') executed. Agent #47 access terminated.
Why This Is Impossible with Shared Secrets
Shared secrets are identity-agnostic. The API key doesn't know which agent holds it. When you revoke the key, you revoke access for everyone who has that key.
You can't revoke "just Agent #47's copy of the key." That concept doesn't exist. The key is the same everywhere. Revoking it revokes it everywhere.
With shared secrets, granular revocation is architecturally impossible.
You can't revoke one agent without revoking all agents. The credential isn't agent-specific — it's shared. This isn't a limitation of poor implementation. This is the architecture.
The Business Impact
Shared secrets force you to keep compromised agents running.
In production, you can't afford total outages. So security teams delay revocation. They wait for maintenance windows. They coordinate cross-team deployments to re-key everything at once.
The compromised agent keeps running for hours or days because shutting it down means shutting down everything.
With API keys, you get security OR availability. Not both.
With xLink, you get security AND availability. Revoke the compromised identity. The fleet keeps running.
xLink Makes This Trivial
Granular revocation isn't a feature you configure. It's how identity-based authentication works by default.
```typescript
// Add agent to allowlist
await trustRegistry.add('did:key:z6Mk...Agent47', {
  scopes: ['payments:read', 'payments:write']
})

// Revoke specific agent
await trustRegistry.revoke('did:key:z6Mk...Agent47')

// Update scopes for specific agent
await trustRegistry.updateScopes('did:key:z6Mk...Agent48', {
  scopes: ['payments:read'] // remove write access
})
```
Every agent send/receive goes through the trust registry. Revoked DIDs fail immediately. Updated scopes take effect on the next message. No cache invalidation. No eventual consistency. Instant enforcement.
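The enforcement model can be sketched with a plain `Map` — the `authorize` helper and DID strings below are illustrative, not the xLink registry API. Because the scope set is consulted on every call, an update is live on the very next message:

```typescript
// Per-DID scope registry: updates take effect on the next message (sketch)
const registry = new Map<string, Set<string>>();

function authorize(did: string, requiredScope: string): boolean {
  const scopes = registry.get(did); // checked on EVERY message
  return scopes !== undefined && scopes.has(requiredScope);
}

registry.set('did:key:agent-48', new Set(['payments:read', 'payments:write']));
console.log(authorize('did:key:agent-48', 'payments:write')); // true

// Remove write access: enforced on the very next call, no cache to invalidate
registry.set('did:key:agent-48', new Set(['payments:read']));
console.log(authorize('did:key:agent-48', 'payments:write')); // false
console.log(authorize('did:key:agent-48', 'payments:read'));  // true
```

An unknown or revoked DID simply has no entry, so authorization fails closed by default.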
API Keys: Revoke the key → kill the fleet → scramble to re-deploy → customers see downtime.
xLink: Revoke the DID → one agent stops → 999 agents unaffected → customers see nothing.
Granular revocation isn't a feature. It's the absence of shared secrets.
When every agent has a unique identity, revoking one identity affects one agent. This isn't advanced configuration — it's the default behavior of identity-based systems.
You don't choose between security and availability anymore. You get both.
The Problem
Machine-to-machine security today is a patchwork of API keys, OAuth client credentials, mTLS certificates, and API gateways — each with its own rotation schedule, configuration surface, and failure modes.
API keys leak. They end up in logs, git commits, environment variables shared over Slack, and CI pipelines with overly broad access. Rotation means touching every service that holds the key — a manual, error-prone process.
OAuth is complex. Client credentials flow requires token endpoints, scopes, refresh logic, and revocation. Every new service needs a registration, a secret, and a grant configuration.
mTLS certs expire. Certificate lifecycle management is a full-time job. Renewal failures cause outages. CA compromise is a single point of failure for the entire mesh.
Gateways add latency and cost. Centralized API gateways become bottlenecks, introduce single points of failure, and charge per-request fees that scale with traffic.
Critical Failure #6: AI Agent Explosion
2024: 1 developer = 3 environments. 2026: 1 developer = 500 AI agents.
The scale problem isn't theoretical anymore. LangChain workflows, CrewAI teams, AutoGPT instances — modern developers spawn hundreds of AI agents to handle document processing, customer support, code generation, data analysis, and orchestration tasks. Each agent needs API access to payments, databases, third-party services, and internal systems.
The current solution: Share 1 API key across all 500 agents. Because what else can you do?
The Problem — Modern Reality
Credential management doesn't scale to AI agent fleets:
- Shared API keys: All 500 agents use the same credential. One key leaked = entire fleet compromised.
- Quota chaos: All agents hit the same rate limit. Agent #47 runs wild → all 500 agents throttled.
- Zero attribution: Logs show "API_KEY: sk_live_xyz made 10,000 calls." Which agent? No idea.
- Blast radius: One agent misbehaves → revoke the key → all 500 agents stop → production outage.
- No per-agent control: Can't give Agent #12 read-only access while Agent #89 has write access. Everyone gets the same permissions.
Code Comparison
```typescript
// Every agent uses the same API key
const SHARED_API_KEY = process.env.PAYMENTS_API_KEY;

// Agent #1
await fetch('https://api.payments.com/transfer', {
  headers: { 'Authorization': `Bearer ${SHARED_API_KEY}` }
});

// Agent #2
await fetch('https://api.payments.com/transfer', {
  headers: { 'Authorization': `Bearer ${SHARED_API_KEY}` }
});

// ... Agent #3 through #499 (all using SHARED_API_KEY)

// Agent #500
await fetch('https://api.payments.com/transfer', {
  headers: { 'Authorization': `Bearer ${SHARED_API_KEY}` }
});

// PROBLEM: 500 agents, 1 identity
// Rate limit hit? All 500 agents throttled.
// Key leaked? All 500 agents compromised.
// Bad actor? Can't isolate. Revoke = outage.
```
```typescript
// Each agent generates its own identity
const agent1 = await Agent.quickstart(); // did:key:z6MkAgent1...
const agent2 = await Agent.quickstart(); // did:key:z6MkAgent2...
// ... 498 more agents, each with unique DID

// Agent #1 sends with its own signature
await agent1.send({
  to: 'did:key:z6MkPaymentsService...',
  payload: { action: 'transfer', amount: 100 }
}); // Signed by Agent #1's Ed25519 key

// Agent #2 sends with ITS OWN signature
await agent2.send({
  to: 'did:key:z6MkPaymentsService...',
  payload: { action: 'transfer', amount: 50 }
}); // Signed by Agent #2's Ed25519 key

// RESULT: 500 agents, 500 identities
// Rate limit per-agent (Agent #47 throttled, Agent #48 unaffected)
// Revoke Agent #47 → 499 agents keep running
// Per-agent attribution in every log entry
```
The Scale Problem
At enterprise scale, the shared credential model completely breaks down:
| Fleet Size | Shared Credentials (Current) | xLink Per-Agent Identity |
|---|---|---|
| 10 devs × 500 agents | 5,000 agents hitting same quota | 5,000 separate identities, 5,000 separate quotas |
| Rate limiting | 1,000 req/min shared across 5,000 agents | 1,000 req/min per agent (5M req/min total) |
| One agent misbehaves | Entire fleet throttled or blocked | One agent isolated, 4,999 unaffected |
| Audit trail | "API_KEY: sk_live_xyz made 2.4M calls" | "did:key:z6MkAgent47... made 847 calls" |
| Compromised agent | Revoke key → 5,000 agents down | Revoke DID → 1 agent down |
Why This Matters
Per-agent identity enables operations that are impossible with shared credentials:
- Per-agent rate limiting: 1,000 agents = 1,000 independent quotas. Agent #47 hitting rate limits doesn't affect Agent #48.
- Per-agent attribution: Every API call is cryptographically signed by a unique DID. Logs show exactly which agent made which call.
- Per-agent revocation: Bad agent identified → revoke its DID → one agent stops → 999 agents keep working → zero production impact.
- Per-agent scopes: Agent #12 gets read-only access. Agent #89 gets write access. Agent #200 gets admin scopes. Granular control at identity level.
- Unlimited scaling: Add 500 more agents → each gets its own identity → no credential distribution, no shared secrets, no coordination overhead.
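A runnable sketch of per-agent identity using Node's built-in Ed25519 keys — the `did:sketch:` fingerprint is an illustrative placeholder, not a real `did:key` encoding:

```typescript
import { generateKeyPairSync, sign, verify, createHash } from 'node:crypto';

// Each agent gets its own Ed25519 keypair; the "DID" is an illustrative
// fingerprint of the public key, not a real did:key multibase encoding.
const agents = Array.from({ length: 5 }, (_, i) => {
  const { publicKey, privateKey } = generateKeyPairSync('ed25519');
  const did = 'did:sketch:' + createHash('sha256')
    .update(publicKey.export({ type: 'spki', format: 'der' }))
    .digest('hex')
    .slice(0, 16);
  return { name: `agent-${i}`, did, publicKey, privateKey };
});

// All identities are distinct — no shared secret anywhere
const uniqueDids = new Set(agents.map((a) => a.did));
console.log(uniqueDids.size); // 5

// A message signed by agent-0 verifies against agent-0's key only
const msg = Buffer.from('{"action":"transfer","amount":100}');
const sig = sign(null, msg, agents[0].privateKey);
console.log(verify(null, msg, agents[0].publicKey, sig)); // true
console.log(verify(null, msg, agents[1].publicKey, sig)); // false
```

Scaling from 5 agents to 5,000 is just a larger loop: identity generation is local, so adding agents requires no credential distribution.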
Unlimited agents, each with unique identity.
xLink doesn't require you to "manage credentials at scale." It eliminates credential management entirely. Each agent generates its own Ed25519 keypair. No distribution, no rotation, no shared secrets. Identity-based authentication is the only architecture that scales to AI agent fleets.
You're not managing 500 agents with 1 key. You're managing 500 agents with 500 identities.
When every agent has a unique cryptographic identity, the credential management problem disappears. No shared secrets to rotate. No keys to distribute. No blast radius from a single compromised credential. Each agent is independently verifiable, independently revocable, and independently rate-limited.
This isn't a feature you configure. It's the default behavior of identity-based systems.
Critical Failure #5: Compliance Audit Hell
"Which of your 200 agents made this call?" — When a SOC 2 auditor asks this question during your compliance review, you freeze. With shared API keys, you literally cannot answer. Every agent uses the same credential. Your logs show thousands of calls, but zero agent-level attribution. You promise to "improve logging" and hope they accept it.
The audit failure cascade:
```
// Server logs from API gateway
2024-04-20 14:32:18 | API_KEY: sk_live_abc123 | POST /orders           | 201 Created
2024-04-20 14:32:22 | API_KEY: sk_live_abc123 | POST /payments         | 200 OK
2024-04-20 14:32:25 | API_KEY: sk_live_abc123 | DELETE /inventory/1847 | 204 No Content

// QUESTION FROM AUDITOR: Which agent deleted inventory item 1847?
// YOUR ANSWER: "We don't know. All 200 agents share that API key."
// AUDITOR'S RESPONSE: "That's a control deficiency. SOC 2 Type II fails."
```
You scramble to provide a narrative explanation—maybe correlate timestamps with deployment logs, cross-reference with CI/CD pipelines, check Kubernetes pod names. But that's not cryptographic evidence. It's forensic guesswork. And your auditor knows it.
Why this matters for compliance frameworks:
- SOC 2 (Type II): Requires attribution of actions to specific entities. Shared credentials fail CC6.1 (Logical Access Controls) and CC6.6 (Audit Logging). Auditors require cryptographic proof, not log correlation.
- ISO 27001 (A.9.2.1): "User registration and de-registration" mandates unique identifiers per entity. API keys shared across 200 agents violate this control. Event logging controls (A.12.4.1) require individual accountability.
- HIPAA (§164.312(b)): Audit controls must record "which person or entity accessed" PHI. Shared API keys cannot satisfy this requirement. OCR enforcement actions cite inability to attribute access.
- GDPR (Article 32(1)(d)): "Ability to ensure ongoing confidentiality" requires knowing who accessed what. Shared keys destroy this capability. GDPR Article 5(2) accountability principle demands attributable actions.
The xLink solution: DID-based cryptographic audit trail
Every xLink message is signed with the sender's private key, and the signature verifies against the sender's DID. The signature proves which agent (by DID) sent the message. No shared credentials. No ambiguity. No forensic correlation required.
```
// Audit log with cryptographic attribution
2024-04-20 14:32:18 | FROM: did:key:z6MkOrderAgent47Qp...   | action: createOrder    | orderId: 8472  | SIG: Ed25519(valid)
2024-04-20 14:32:22 | FROM: did:key:z6MkPaymentAgent89x...  | action: processPayment | amount: 127.50 | SIG: Ed25519(valid)
2024-04-20 14:32:25 | FROM: did:key:z6MkInventoryAgent12... | action: deleteItem     | itemId: 1847   | SIG: Ed25519(valid)

// QUESTION FROM AUDITOR: Which agent deleted inventory item 1847?
// YOUR ANSWER: "did:key:z6MkInventoryAgent12..., verified via cryptographic signature."
// AUDITOR'S RESPONSE: "Perfect. Cryptographic proof of attribution. Control satisfied."
```
For details on how DID-based signatures provide cryptographic proof and non-repudiation, see Critical Failure #7: Zero Cryptographic Proof.
Retention and immutability:
The enterprise AuditLog module stores signed envelopes with 7-year retention (configurable for SOX, SEC 17a-4, or other regulations). Each envelope includes timestamp, sender DID, recipient DID, scope, and cryptographic signature. Logs are append-only and HMAC-chained to detect tampering. This satisfies regulatory requirements for audit trail immutability and long-term retention.
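The HMAC-chaining idea can be sketched with Node's built-in crypto — the entry format and the `org-audit-key` secret below are assumptions for illustration, not the AuditLog module's actual wire format:

```typescript
import { createHmac } from 'node:crypto';

// Append-only, HMAC-chained log: each entry's MAC covers the previous MAC,
// so editing any entry breaks every link after it (illustrative sketch).
const key = 'org-audit-key'; // ASSUMPTION: org-held HMAC secret
type Entry = { record: string; mac: string };

function append(log: Entry[], record: string): void {
  const prevMac = log.length ? log[log.length - 1].mac : 'genesis';
  const mac = createHmac('sha256', key).update(prevMac + record).digest('hex');
  log.push({ record, mac });
}

function verifyChain(log: Entry[]): boolean {
  let prevMac = 'genesis';
  for (const { record, mac } of log) {
    const expected = createHmac('sha256', key).update(prevMac + record).digest('hex');
    if (mac !== expected) return false;
    prevMac = mac;
  }
  return true;
}

const log: Entry[] = [];
append(log, 'did:key:agent-47 | submitTradeOrder | AAPL x1000');
append(log, 'did:key:agent-47 | cancelOrder | AAPL');
console.log(verifyChain(log)); // true

log[0].record = 'did:key:agent-47 | submitTradeOrder | AAPL x1'; // tamper
console.log(verifyChain(log)); // false — chain broken at the edited entry
```

Anyone holding the HMAC key can detect tampering in O(n) by replaying the chain; deleting or reordering entries breaks it just as editing does.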
```typescript
// Configure audit log for financial services (SEC 17a-4 compliance)
const audit = await AuditLog.enterprise({
  retention: '7-years',  // SOX / SEC 17a-4 / FINRA 4511
  encryption: 'org-key', // Encrypt at rest with org key
  hmacChain: true,       // Tamper-evident HMAC chain
  immutable: true        // Append-only, no deletions
});

// Every agent action is automatically logged with cryptographic proof
const agent = await Agent.create({
  name: 'trading-agent-47',
  audit // Compliance copy to audit log
});

// This send() creates an audit entry with:
// - Timestamp (ISO 8601 UTC)
// - Sender DID (did:key:z6Mk...)
// - Recipient DID
// - Action payload
// - Cryptographic signature (non-repudiable proof)
// - HMAC linking to previous entry (tamper-detection)
await agent.send({
  to: counterpartyDid,
  payload: { action: 'submitTradeOrder', symbol: 'AAPL', qty: 1000 },
  scope: 'trading'
});
```
The contrast that auditors see:
| Property | API Keys (Shared Credentials) | xLink (DID-Based Identity) |
|---|---|---|
| Attribution | No — All agents share one key, logs show key not agent | Yes — Every message signed by unique DID |
| Non-Repudiation | No — Agent can claim "someone else used the shared key" | Yes — Cryptographic signature proves agent identity |
| Audit Evidence | Narrative correlation (timestamps, pod names, CI/CD logs) | Cryptographic proof (signature verification) |
| SOC 2 CC6.1/CC6.6 | Fails — Shared credentials violate logical access controls | Passes — Unique DID per agent |
| ISO 27001 A.9.2.1 | Fails — No unique identifier per entity | Passes — DID is unique identifier |
| HIPAA §164.312(b) | Fails — Cannot determine which agent accessed PHI | Passes — DID proves which agent |
| GDPR Article 32 | Fails — No accountability for access | Passes — Cryptographic signature ensures accountability |
| Tamper Detection | Application-level logging (logs can be edited) | HMAC-chained audit trail (tampering breaks chain) |
| Long-Term Retention | Manual log archival, retention policies enforced by app | Built-in 7-year retention, immutable append-only log |
When the auditor leaves satisfied:
With xLink, compliance audits go from "explain why your logs don't prove attribution" to "here's cryptographic proof of every agent action." SOC 2 Type II, ISO 27001, HIPAA, and GDPR requirements for accountability, non-repudiation, and audit trails are met at the protocol level—not bolted on via application logging. Your auditor sees cryptographic signatures and HMAC-chained immutable logs, and checks the box.
| Property | API Keys | OAuth 2.0 | mTLS | xLink |
|---|---|---|---|---|
| Initial setup | Minutes | Hours | Days | 5 lines |
| Key rotation | Manual | Token refresh | Cert renewal | Never* |
| E2E encryption | No | No | Transport only | Yes |
| Forward secrecy | No | No | Optional | Auto ECDH |
| Non-repudiation | No | No | No | Ed25519 + ML-DSA-65 |
| Replay prevention | No | Partial | Partial | Nonce store |
| Info-theoretic mode | No | No | No | XorIDA split |
| npm dependencies | Varies | 10-50+ | OS-level | 0 |
* Ed25519 identity keys are permanent. No expiry, no renewal. New identity = new DID. See Limitations for details.
Critical Failure #7: Zero Cryptographic Proof
Bearer tokens: "Trust me, I have the key." Signatures: Mathematical proof.
When your security team investigates a suspicious API call, the forensic trail goes cold immediately. The logs show "Authorization: Bearer abc123" — but that token could have been used by anyone who had access to it. There's no cryptographic proof of who made the call. Just proof that someone had the token.
The problem with bearer credentials:
API keys and OAuth tokens are bearer credentials. If you possess the secret, you are authenticated. There is no cryptographic binding between the credential and the entity using it. This creates fundamental problems:
- Anyone with the token is trusted: If Agent A leaks an API key, Agent B can use it to impersonate Agent A. The server has no way to distinguish them.
- No proof of origin: Logs show "API key sk_123 made this call" — but they don't show which agent used that key. Multiple agents sharing the same key are indistinguishable.
- Forensic guesswork: To determine who made a suspicious call, you correlate timestamps with deployment logs, Kubernetes pod names, and CI/CD pipelines. That's not proof. It's a narrative reconstruction.
- No non-repudiation: An agent can claim "someone else must have used the shared API key." Without cryptographic signatures, you cannot definitively prove they made the call.
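The difference is demonstrable with Node's built-in Ed25519 support: a signature verifies only for the exact message and the exact keypair that produced it, whereas a bearer string authenticates whoever holds it.

```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// Possession vs proof: an Ed25519 signature is bound to one keypair
// AND one exact message (illustrative sketch).
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

const payload = Buffer.from('{"action":"processPayment","amount":50000}');
const signature = sign(null, payload, privateKey);

// Genuine message from the key holder: verifies
console.log(verify(null, payload, publicKey, signature)); // true

// Same signature on a tampered payload: rejected
const tampered = Buffer.from('{"action":"processPayment","amount":9999999}');
console.log(verify(null, tampered, publicKey, signature)); // false

// Another party's key cannot produce a signature for this identity
const attacker = generateKeyPairSync('ed25519');
const forged = sign(null, payload, attacker.privateKey);
console.log(verify(null, payload, publicKey, forged)); // false
```

Nothing secret crosses the wire: the verifier needs only the public key, which is exactly what a DID encodes.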
Code comparison: bearer token vs cryptographic signature
```
// HTTP request with bearer credential
POST /api/payments HTTP/1.1
Host: payments.example.com
Authorization: Bearer sk_live_abc123xyz

// Server logs this request
2024-04-20 15:42:18 | API_KEY: sk_live_abc123xyz | POST /payments | amount: 50000 | 200 OK

// PROBLEM: Anyone with this token can make this request
// - Agent A, Agent B, Agent C all share sk_live_abc123xyz
// - Server cannot tell which agent sent this
// - No cryptographic proof of sender identity
// - Forensics: correlate timestamps, guess from deployment logs
```
```typescript
// xLink envelope with Ed25519 signature
{
  "from": "did:key:z6MkPaymentAgent89xQp...",  // Sender's DID (public key)
  "to": "did:key:z6MkPaymentService47...",
  "payload": { "action": "processPayment", "amount": 50000 },
  "nonce": "7f3a9c2b...",                      // Prevents replay attacks
  "signature": "8d4e5f..."                     // Ed25519(from + to + payload + nonce)
}

// Recipient verifies signature BEFORE processing
const isValid = await ed25519.verify(
  envelope.signature,
  envelope.from,  // Public key from DID
  envelope.to + envelope.payload + envelope.nonce
);
if (!isValid) {
  throw new Error('Invalid signature - message rejected');
}

// PROOF: This message was signed by did:key:z6MkPaymentAgent89xQp...
// - Only the holder of the PRIVATE key could create this signature
// - Signature mathematically binds the message to the sender's identity
// - Cannot be forged (would require breaking Ed25519, computationally infeasible)
// - Audit trail: cryptographic verification, not log correlation
```
The fundamental difference:
| Property | Bearer Token (API Keys, OAuth) | Cryptographic Signature (xLink) |
|---|---|---|
| Proof of Identity | "I have the secret" — possession-based trust | "I signed this with my private key" — mathematical proof |
| Non-Repudiation | No — Agent can claim "someone else used the shared token" | Yes — Signature mathematically proves sender |
| Forensic Evidence | Log correlation (timestamps, pod names, CI/CD) | Cryptographic verification (signature check) |
| Forgery Resistance | No — Anyone with token can impersonate | Yes — Breaking Ed25519 is computationally infeasible |
| Audit Trail Quality | Narrative reconstruction ("we think Agent A did this based on timestamps") | Mathematical certainty ("Agent A's private key signed this message") |
| Shared Credentials Problem | Unsolvable — Multiple agents share the same token | Not applicable — Each agent has unique DID + keypair |
| Attribution Granularity | API key level (shared across many agents) | Agent level (unique DID per agent) |
| Legal Standing | Weak (repudiation possible, "someone else had access") | Strong (non-repudiable signatures, eIDAS-compliant) |
What cryptographic proof means for your systems:
Every xLink message includes an Ed25519 signature that binds the message content to the sender's identity (DID). The signature is computed over the entire envelope: sender DID, recipient DID, payload, nonce, and timestamp. This provides:
- Cryptographic attribution: Logs don't say "API key abc123 made this call." They say "did:key:z6MkAgent47... signed this message with Ed25519, signature verified." That's mathematical proof, not narrative correlation.
- Non-repudiation: The agent cannot later claim "I didn't send that message." Their private key signed it. The signature verification proves it. This satisfies legal and regulatory requirements for non-repudiable audit trails.
- Tamper detection: If any field in the envelope is modified after signing, the signature verification fails. You know immediately that the message was altered in transit or in storage.
- No shared credentials: Each agent has a unique DID and Ed25519 keypair. There is no "shared API key" problem. Every message is cryptographically attributable to exactly one sender.
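The sign-and-verify flow described above can be sketched with Node's built-in Ed25519 support. This is an illustration of the mechanism, not the SDK's actual code path; the envelope fields and DID strings are hypothetical placeholders.

```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// Hypothetical envelope; field names mirror the document's examples
const envelope = {
  from: 'did:key:z6MkSenderExample',
  to: 'did:key:z6MkRecipientExample',
  payload: { action: 'processPayment', amount: 50000 },
  nonce: 'nonce-7f3a9c2b',
};

// Each agent holds its own Ed25519 key pair; the DID encodes the public key
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

// Sign the serialized envelope (Ed25519 takes no digest algorithm, hence null)
const bytes = Buffer.from(JSON.stringify(envelope));
const signature = sign(null, bytes, privateKey);

// Recipient verifies before processing
console.log(verify(null, bytes, publicKey, signature)); // true

// Tampering with any field breaks verification
const tampered = Buffer.from(
  JSON.stringify({ ...envelope, payload: { ...envelope.payload, amount: 99999 } })
);
console.log(verify(null, tampered, publicKey, signature)); // false
```

Only the holder of the private key can produce a signature that verifies against the public key, and any post-signing modification of the envelope bytes causes verification to fail.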
Why this matters for compliance and legal contexts:
Regulatory frameworks increasingly require non-repudiation—cryptographic proof that a specific entity performed a specific action. Bearer credentials cannot provide this. Shared API keys make it impossible. xLink's Ed25519 signatures satisfy these requirements at the protocol level:
- Financial services (SEC 17a-4, FINRA 4511): Trade orders must have non-repudiable attribution. Ed25519 signatures on xLink envelopes prove which agent submitted each order. No "someone else had the API key" defense.
- Healthcare (HIPAA § 164.312(c)(1)): Integrity controls must detect unauthorized modifications. xLink signatures fail verification if any field is altered, providing tamper-evidence at the message level.
- Government (NIST SP 800-53 AU-10): Non-repudiation control requires binding actions to identities. DID + Ed25519 satisfies this requirement without application-level workarounds.
- European eIDAS Regulation: Advanced electronic signatures require proof of signer identity and message integrity. xLink envelopes meet both criteria—the Ed25519 signature proves the signer's identity and, because it covers the entire envelope, its integrity.
- Legal discovery: When opposing counsel subpoenas your API logs, "we think Agent A did this based on timestamps" won't survive cross-examination. "Agent A's Ed25519 signature is mathematically verified on this message" will.
Two-Way Communication (vs One-Way APIs)
Traditional APIs provide one-way request/response flows. When bidirectional communication is needed, systems cobble together separate mechanisms: the client makes HTTP requests in one direction, and the server sends webhooks or uses SSE/WebSockets in the reverse direction. This creates architectural complexity, separate authentication flows, and webhook delivery failures.
xLink provides native two-way peer-to-peer communication. Both services are agents with DIDs, public keys, and the ability to send and receive messages. There is no client/server distinction at the protocol level — both parties are peers. This eliminates the need for webhooks, polling, and separate bidirectional channels.
| Aspect | Traditional APIs | xLink |
|---|---|---|
| Communication Model | Client → Server (one-way) | Peer ↔ Peer (native two-way) |
| Reverse Direction | Webhooks (separate HTTP POST, delivery failures, retries, auth) | Same protocol (send message to peer's DID) |
| Authentication | Two separate flows (API key for requests, webhook secret for callbacks) | Single mechanism (Ed25519 signatures both directions) |
| Delivery Guarantees | Best-effort webhooks, no built-in retry, manual dead-letter queues | Store-and-forward relay with 7-day TTL |
| Firewall Traversal | Webhook receiver must be publicly accessible (or use ngrok tunnels) | Both peers can be behind NAT/firewall (pull messages from relay) |
| Complexity | 2 separate subsystems (API client + webhook server) | 1 agent (send + receive) |
Both directions use the same agent.send() and agent.receive() primitives.
```typescript
// Service A sends request to Service B
const request = await serviceA.send({
  to: serviceBDid,
  payload: { action: 'processPayment', amount: 100 },
  scope: 'payments',
});

// Service B receives request
const inbound = await serviceB.receive(request);

// Service B sends response back to Service A
const response = await serviceB.send({
  to: serviceADid,
  payload: { status: 'success', transactionId: 'tx_123' },
  scope: 'payments',
});

// Service A receives response (no webhook needed)
const result = await serviceA.receive(response);
```
With APIs, the reverse direction would require Service A to expose a webhook endpoint, implement authentication, handle delivery failures, and manage retries. With xLink, both directions use the same authenticated message-passing primitives. The relay server handles store-and-forward, delivery guarantees, and NAT traversal automatically.
The Old Way
The New Way
Mathematical Instability
Shared secret authentication doesn't degrade gracefully. It collapses exponentially.
The fundamental problem with API keys, OAuth tokens, and shared secrets is compound probability. When multiple systems depend on a single authentication credential, failures don't add linearly—they multiply catastrophically.
The Collapse Formula
P(failure) = 1 − (1 − p)^N
Where p is the per-system failure rate and N is the number of dependent systems. This isn't a linear relationship—it's exponential. As you add more services sharing the same credential, the probability of at least one failure approaches 100% rapidly.
Example: 500 AI Agents Sharing One API Key
- Single agent failure rate: p = 0.01 (1% chance of auth failure per hour)
- Fleet size: N = 500 agents
- System-wide failure probability: 1 − (1 − 0.01)^500 = 0.9934
- Result: 99.34% chance of cascade failure within 1 hour
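The arithmetic above can be reproduced directly. `cascadeProbability` below is a hypothetical helper for illustration, not an SDK function:

```typescript
// Compound failure probability for N systems sharing one credential:
// P(failure) = 1 − (1 − p)^N
function cascadeProbability(p: number, n: number): number {
  return 1 - Math.pow(1 - p, n);
}

console.log((cascadeProbability(0.01, 10) * 100).toFixed(2));  // '9.56'
console.log((cascadeProbability(0.01, 500) * 100).toFixed(2)); // '99.34'
```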
Failure Curve: Exponential Collapse
The chart below shows how system reliability degrades as you add more services sharing the same credential. Notice the sharp collapse beyond N=50 systems.
Breakpoint Analysis
Critical thresholds where shared-secret systems transition from stable to unstable:
| Number of Systems (N) | Per-System Failure Rate (p) | System-Wide Failure Probability | Zone |
|---|---|---|---|
| 10 | 0.01 (1%) | 9.56% | SAFE |
| 50 | 0.01 (1%) | 39.50% | WARNING |
| 100 | 0.01 (1%) | 63.40% | BREAK |
| 500 | 0.01 (1%) | 99.34% | COLLAPSE |
| 1000 | 0.01 (1%) | 99.996% | COLLAPSE |
Zone Definitions
SAFE ZONE (< 20% failure)
Low system count. Shared secrets manageable. Cascades rare but possible.
WARNING ZONE (20-50% failure)
Moderate risk. Token refresh failures begin impacting productivity. Manual intervention frequent.
BREAK ZONE (50-90% failure)
High risk. System spends more time recovering than working. Cascades are expected, not exceptional.
COLLAPSE ZONE (> 90% failure)
Critical. System is perpetually down. Shared secret architecture is fundamentally incompatible with scale.
Key Insight: Reliability doesn't degrade linearly. It collapses exponentially. Once you cross into the BREAK zone, the system becomes unrecoverable without architectural change.
Why xLink Eliminates This Problem
xLink uses cryptographic identity instead of shared secrets. Each agent has its own Ed25519 key pair. When Agent #47 fails, it doesn't cascade to the other 499 agents—the failure is isolated.
P(xlink_cascade) = 0
With independent identities, the compound probability formula no longer applies. There's no shared credential to expire, leak, or fail system-wide. Failures remain local, exactly where they occur.
Measured Impact (Reproducible Benchmark):
- OAuth cascade recovery (500 agents): 54,853 ms (restart entire fleet)
- xLink isolated failure recovery: 91 ms (retry one message)
- 603× faster recovery — because cascades are mathematically impossible
Real-World Use Cases
Seven scenarios where xLink replaces traditional API key management with Authenticated Cryptographic Interfaces.
```typescript
import { Agent, MemoryTrustRegistry, LoopbackTransport } from '@private.me/xlink';

const registry = new MemoryTrustRegistry();
const transport = new LoopbackTransport();

// Create orchestrator + worker agents
const orchestrator = await Agent.quickstart({
  name: 'Orchestrator',
  registry,
  transport,
  sendScopes: ['task:assign', 'task:cancel']
});

const worker1 = await Agent.quickstart({
  name: 'Worker-1',
  registry,
  transport,
  sendScopes: ['task:result'],
  receiveScopes: ['task:assign'] // Only accepts task assignments
});

// Orchestrator dispatches work
await orchestrator.send({
  to: worker1.did,
  payload: { taskId: 't-001', action: 'process-pdf' },
  scope: 'task:assign'
});

// Worker receives and processes
const envelope = transport.outbox[0]!;
const task = await worker1.receive(envelope);
console.log(task.value.payload.action); // 'process-pdf'
```
Each sensor gets a deterministic identity from a factory-burned seed. Signed telemetry envelopes flow to gateways. No API keys to rotate across 10,000 devices.
Agent.fromSeed() + createSignedEnvelope()

AI agents negotiate tasks via encrypted envelopes with scope-based authorization. The orchestrator verifies each agent’s identity before dispatching work.
Agent.create() + agent.send() + scopes

Services authenticate via DID instead of shared secrets. ECDH forward secrecy protects inter-service traffic. No certificate authority to manage.
HttpsTransportAdapter + MemoryTrustRegistry

PHI travels via split-channel — any single intercepted channel reveals zero patient data. HIPAA compliance by mathematical guarantee, not policy alone.
security: 'high' (auto 2-of-3)

Order routing with non-repudiation. Ed25519 signatures provide cryptographic proof of trade instructions. Timestamp validation prevents replay attacks.
Ed25519 non-repudiation + 30s window

Split-channel with 3-of-5 threshold across classified and unclassified networks. Information-theoretic security exceeds AES-256 — quantum-proof by construction.
security: 'critical' (3-of-5)

AI agents purchase ACIs using xLink identity instead of email addresses. Cryptographic signatures prevent impersonation. 60 req/min rate limit (6× faster than email-based).
agent.createEnvelope() + POST /api/purchase

```typescript
import { Agent, MemoryTrustRegistry } from '@private.me/xlink';

// Sender: Payment Service
// Recipient: Analytics Service
const registry = new MemoryTrustRegistry();

// Analytics only accepts 'payments' scope, rejects 'orders'
await registry.register(
  analyticsDid,
  analyticsPublicKey,
  'Analytics Service',
  ['analytics'],  // Send scopes
  undefined, undefined, undefined, false,
  ['payments']    // Receive scopes (only accepts payments)
);

// This succeeds (both sides accept 'payments')
await sender.send({ to: analyticsDid, payload: txData, scope: 'payments' });

// This fails with RECEIVER_SCOPE_DENIED (analytics doesn't accept 'orders')
const result = await sender.send({ to: analyticsDid, payload: orderData, scope: 'orders' });
if (!result.ok && result.error.code === 'RECEIVER_SCOPE_DENIED') {
  console.error('Recipient does not accept this scope');
}
```
Bilateral Authorization in Practice
Four industry scenarios where receiver-side scope validation (receiveScopes) provides defense-in-depth security against scope escalation attacks.
IoT Fleet Management
Problem: Sensor devices deployed in the field receive commands from control systems. If a control system is compromised, an attacker could send malicious commands (firmware updates, configuration changes, shutdown commands) that are outside the device's normal operational scope. Traditional APIs have no device-side defense against this.
```typescript
import { Agent } from '@private.me/xlink';

// Temperature sensor only accepts telemetry commands
const sensor = await Agent.create({
  registry,
  transport,
  receiveScopes: ['telemetry:read', 'telemetry:configure']
});

// Control system sends command
const result = await sensor.receive();

// Device rejects 'firmware:update' scope even if sender has it
if (!result.ok && result.error.code === 'SCOPE_NOT_ALLOWED') {
  // Log security event - attacker may have compromised control system
  logger.security('Rejected out-of-scope command', result.error);
}
```
Security benefit: Even if the control system is fully compromised and has firmware:update scope in the trust registry, the sensor rejects the command because it's not in its receiveScopes allowlist. This prevents command injection attacks and limits blast radius to the device's intended operational scope. The device enforces its own security policy independent of the sender's permissions.
Healthcare Integrations
Problem: Electronic Health Record (EHR) systems integrate with multiple third-party services (billing, lab results, imaging). If a billing system is compromised, an attacker shouldn't be able to access clinical data even if the billing service technically has broad scopes in the trust registry. HIPAA requires minimum necessary access enforcement.
```typescript
import { Agent } from '@private.me/xlink';

// EHR system only accepts medical data operations
const ehr = await Agent.create({
  registry,
  transport,
  receiveScopes: [
    'patient:read',
    'patient:write',
    'clinical:labs',
    'clinical:imaging'
  ]
});

// Billing service tries to access administrative functions
const result = await ehr.receive();

// EHR rejects 'admin:users' or 'billing:export' scopes
// even if billing service has those scopes in registry
```
Security benefit: The EHR enforces its own scope allowlist independent of what permissions external systems claim. This implements HIPAA's minimum necessary standard at the protocol level. If a billing system is compromised, the attacker cannot pivot to administrative functions or export patient data in bulk because the EHR's receiveScopes blocks non-medical operations. Defense-in-depth ensures that both sender permissions AND receiver policy must align.
Financial Services
Problem: Payment processors receive transaction requests from multiple merchant systems. If a merchant's credentials are compromised, an attacker shouldn't be able to execute refunds, void transactions, or access settlement data even if the merchant technically has those scopes. Financial regulations require strict scope isolation to prevent fraud.
```typescript
import { Agent } from '@private.me/xlink';

// Payment processor only accepts payment operations
const processor = await Agent.create({
  registry,
  transport,
  receiveScopes: ['payment:authorize', 'payment:capture']
});

// Merchant tries to issue refund (requires separate flow)
const result = await processor.receive();

// Processor rejects 'payment:refund' or 'settlement:export'
// These operations require elevated authentication
```
Security benefit: The payment processor enforces operation-level isolation. Even if a merchant's credentials are stolen, the attacker cannot execute refunds or void transactions because those scopes are not in the processor's receiveScopes for standard payment flows. Refunds require a separate authentication flow with stricter controls. This prevents the "compromised merchant account leads to fraudulent refunds" attack pattern that costs payment processors millions annually.
Multi-Tenant SaaS
Problem: Multi-tenant SaaS platforms serve multiple organizations on shared infrastructure. If one tenant's credentials are compromised, an attacker shouldn't be able to access cross-tenant administrative functions or data export capabilities even if those scopes exist in the system. Tenant isolation must be enforced at the protocol level, not just application logic.
```typescript
import { Agent } from '@private.me/xlink';

// Each tenant defines which scopes they accept
const tenantService = await Agent.create({
  registry,
  transport,
  receiveScopes: ['tenant:read', 'tenant:write', 'data:query']
});

// Another tenant tries to access admin or export functions
const result = await tenantService.receive();

// Service rejects 'admin:config', 'data:export', 'tenant:list'
// These are platform-level scopes, not tenant-level
```
Security benefit: Each tenant can define a restrictive receiveScopes allowlist that excludes administrative and cross-tenant operations. If a tenant's credentials are compromised, the platform prevents scope escalation to platform-level functions. The receiver (platform) enforces both tenant-level AND platform-level scope boundaries, creating defense-in-depth isolation. This prevents the "compromised tenant escalates to platform admin" attack that has led to multi-tenant breaches at major SaaS providers.
Provisioning Pipeline Authentication
Problem: Automated provisioning systems traditionally store API keys in environment variables to authenticate with email providers, payment processors, and cloud services. If the provisioning service crashes, API keys may be exposed in logs, memory dumps, or error reports. Traditional key rotation (30-90 day cycles) creates operational overhead and security windows.
```typescript
import { Agent } from '@private.me/xlink';

// Provisioning service authenticates via cryptographic identity
const provisioningAgent = await Agent.create({
  registry,
  transport,
  sendScopes: ['provisioning:notify', 'provisioning:deploy']
});

// Send provisioning notification (no API key stored)
const result = await provisioningAgent.send({
  to: emailServiceDid,
  payload: {
    recipient: customer.email,
    deployment: connectionId,
    status: 'active'
  },
  scope: 'provisioning:notify'
});

// If service crashes, no credentials are exposed
// Attacker gets Ed25519 keypair, not API keys
```
Traditional approach: Store RESEND_API_KEY in environment variables. If the provisioning service crashes, the API key may be exposed in logs or core dumps. Rotating the key requires updating environment variables across all deployment environments and restarting services.
xLink approach: The provisioning service proves its identity cryptographically using an Ed25519 keypair. No API keys exist to store or rotate. Even if an attacker gains root access to the server, they cannot extract credentials that don't exist. The keypair alone is useless without the registered identity in the trust registry.
Zero-trust automation: The provisioning service authenticates without storing secrets. If the service is compromised, no API keys are leaked—because they don't exist. Attackers get cryptographic proofs, not credentials. Security incident response time reduced from hours (revoke + rotate keys across environments) to seconds (revoke identity in trust registry).
Security benefit: Full Control's provisioning system uses xLink to authenticate with email providers (Resend) and deployment infrastructure. Zero credential storage means zero credential exposure. Traditional API key rotation (30-90 day cycles) is eliminated entirely. If the provisioning service is compromised, attackers cannot impersonate the service to other systems because the cryptographic identity can be instantly revoked in the trust registry. The attack surface is reduced from "steal API key = full access" to "steal keypair = useless without trust registry authorization."
Real-world impact: Full Control's provisioning system handles 10+ customer deployments with zero credential exposure risk. Traditional key rotation cycles are eliminated. Security incident response time reduced from hours (revoke + rotate + redeploy) to seconds (revoke identity). Compliance audits simplified—there are no API keys to audit, rotate, or secure.
Purchasing ACIs with xLink
AI agents can purchase ACIs using xLink M2M authentication instead of email addresses. Cryptographic signatures prevent impersonation, and identity-based rate limits are 6× faster than email-based flows.
Why Use xLink for Purchases
- Privacy: Email optional — fallback generated as agent-{did_suffix}@private.me if missing
- Security: Signed envelopes prevent purchase request impersonation (no stolen API keys)
- Performance: 60 requests/min rate limit vs 10 requests/min for email-based purchases
- M2M-native: No human intervention required for subscription management
Pricing Tiers
All ACIs use standardized tier-based pricing with a 3-month free trial (no credit card required). AI agents can purchase any tier programmatically using xLink identity.
Standard pricing: $5/month Basic • $10/month Middle • $15/month Enterprise
Volume discounts: 10-30% off for 5+ ACIs. Additional 10-15% off for annual prepay. Agent-based purchases inherit organization-level discounts automatically via DID-to-organization mapping.
How It Works
- Wrap purchase request in xLink envelope with agent signature
- Server unwraps envelope and verifies cryptographic signature
- Email fallback generated if missing: agent-{did_suffix}@private.me
- Stripe subscription created with metadata linking agent DID
- Response includes connection ID for deployment
Code Examples
Example 1: Basic Purchase with xLink Envelope
```typescript
import { Agent } from '@private.me/xlink';

// Create agent (or use existing)
const agent = await Agent.quickstart({ name: 'purchase-agent' });

// Wrap purchase request in signed envelope
const envelope = await agent.createEnvelope({
  to: 'did:key:z6Mkp...', // Private.Me server DID
  payload: {
    aci: 'xlink',
    tier: 'basic',        // 'basic' | 'middle' | 'enterprise'
    paymentMethod: 'pm_card_visa'
  },
  scope: 'aci:purchase'
});

// Submit to ACI Purchase Endpoint
const response = await fetch('https://private.me/api/purchase', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Client-Type': 'ai-agent' // Increases rate limit to 60/min
  },
  body: JSON.stringify(envelope)
});

const result = await response.json();
console.log(result.xlinkAuthenticated); // true (xLink signature verified)
console.log(result.connectionId);       // Connection ID for deployment
```
Example 2: Side-by-Side Comparison
| Feature | Email-Based | xLink-Based |
|---|---|---|
| Authentication | Email address | Cryptographic signature |
| Rate Limit | 10 req/min | 60 req/min |
| Email Required? | ✓ Required | Optional (auto-generated) |
| Impersonation Risk | Medium (email spoofing) | None (signature verification) |
| Idempotency Cache | 24 hours | 24 hours |
| Response Field | xlinkAuthenticated: false | xlinkAuthenticated: true |
Example 3: Response Handling
```typescript
// Purchase response structure
const result = await response.json();

if (result.xlinkAuthenticated) {
  // xLink signature verified — higher trust
  console.log('Purchase authenticated via xLink identity');
  console.log(`Connection ID: ${result.connectionId}`);
  console.log(`Agent DID: ${result.agentDid}`);
  console.log(`Email (fallback): ${result.email}`);
} else {
  // Email-based purchase (legacy flow)
  console.log('Purchase using email address');
}

// Common fields (both flows)
console.log(`Subscription ID: ${result.subscriptionId}`);
console.log(`Trial ends: ${result.trialEnd}`); // 3 months from now
```
Migration Timeline
| Timeline | Status | Details |
|---|---|---|
| Days 0-30 | Optional | Both email and xLink flows work. No breaking changes. |
| Days 30-60 | Encouraged | Documentation updated to show xLink as preferred method. |
| Days 60-90 | Warning | Email-based purchases show deprecation notice. |
| Day 90+ | Required | Email-only requests return XLINK_REQUIRED error. |
Rate Limits & Performance
Include X-Client-Type: ai-agent header to receive the higher 60 req/min rate limit. Without this header, xLink purchases default to 10 req/min (same as email-based).
xLink enables AI agents to purchase, deploy, and manage ACIs without human intervention. The cryptographic signature proves identity—no API keys, no email verification loops, no manual account setup.
Solution Architecture
Six composable modules. Each can be used independently or combined through the high-level Agent ACI.
The One-Time Setup
Five lines of code. No configuration files, no gateway dashboards, no certificate authorities.
```typescript
import { Agent, MemoryTrustRegistry, HttpsTransportAdapter } from '@private.me/xlink';

const registry = new MemoryTrustRegistry();
const transport = new HttpsTransportAdapter({ baseUrl: 'https://api.example.com' });
const agent = (await Agent.create({ name: 'MyService', registry, transport })).value!;

// That's it. agent.did is your identity. agent.send() encrypts + signs + delivers.
await agent.send({ to: recipientDid, payload: { action: 'hello' }, scope: 'chat' });
```
Forward Secrecy
Hybrid post-quantum key agreement: X25519 ECDH + ML-KEM-768 KEM (always-on for v2+ agents). Ephemeral key pairs provide forward secrecy.
The sender generates a fresh ephemeral X25519 key pair per message. The shared secret is derived from the sender's ephemeral private key and the recipient's static X25519 public key. The ephemeral public key is included in the envelope so the receiver can derive the same shared secret.
Compromise of long-term keys does not reveal past messages. Each message uses a unique shared secret derived from a unique ephemeral key pair. Past messages are protected even if both parties' long-term keys are later compromised.
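The ephemeral-static key agreement described above can be sketched with Node's built-in X25519 support. This illustrates the mechanism only; the SDK additionally combines the result with ML-KEM-768 via HKDF.

```typescript
import { generateKeyPairSync, diffieHellman } from 'node:crypto';

// Recipient's static (long-term) X25519 key pair
const recipient = generateKeyPairSync('x25519');

// Sender generates a fresh ephemeral key pair per message
const ephemeral = generateKeyPairSync('x25519');

// Sender side: ephemeral private key + recipient's static public key
const senderSecret = diffieHellman({
  privateKey: ephemeral.privateKey,
  publicKey: recipient.publicKey,
});

// Recipient side: static private key + ephemeral public key (sent in the envelope)
const recipientSecret = diffieHellman({
  privateKey: recipient.privateKey,
  publicKey: ephemeral.publicKey,
});

console.log(senderSecret.equals(recipientSecret)); // true — same shared secret
```

Because the ephemeral private key is discarded after use, a later compromise of either party's long-term key cannot reproduce this per-message secret.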
Split-Channel Mode
Information-theoretic security via XorIDA threshold secret sharing. Automatic risk-based activation.
The SDK automatically applies split-channel protection (2-of-3 XorIDA threshold sharing) for high-risk operations: high-value transfers, cross-organization communication, and sensitive scopes. The plaintext is split into N shares (default 3) over GF(2), with a reconstruction threshold of K (default 2). Each share is independently encrypted, signed, and transmitted via separate channels.
```typescript
// SDK auto-applies split-channel for high-risk operations
await agent.send({
  to: recipientDid,
  payload: { amount: 500000, action: 'transfer' }, // High value → auto 2-of-3
  scope: 'custody',                                // Sensitive scope → auto 2-of-3
  action: 'execute'                                // Critical action → auto 2-of-3
});

// Manual override if needed (most users won't use this)
await agent.send({
  to: recipientDid,
  payload: { data: 'classified' },
  scope: 'secure',
  security: 'high', // Force 2-of-3 even if policy wouldn't auto-apply
});
```
splitChannel: true and splitChannelConfig flags are still supported but deprecated. New code should use security: 'auto' | 'standard' | 'high' | 'critical' for clearer intent.
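The underlying splitting primitive can be illustrated with a simplified n-of-n XOR split over GF(2), where all shares are required to reconstruct. This is a sketch only — the SDK's XorIDA 2-of-3 threshold scheme is more involved, and `xorSplit`/`xorJoin` are hypothetical helpers, not SDK functions.

```typescript
import { randomBytes } from 'node:crypto';

// n-of-n XOR split: n−1 shares are pure random noise, the last is
// secret XOR (all random shares). XOR-ing all n shares recovers the secret.
function xorSplit(secret: Buffer, n: number): Buffer[] {
  const shares = Array.from({ length: n - 1 }, () => randomBytes(secret.length));
  const last = Buffer.from(secret);
  for (const share of shares) {
    for (let i = 0; i < last.length; i++) last[i] ^= share[i];
  }
  return [...shares, last];
}

function xorJoin(shares: Buffer[]): Buffer {
  const out = Buffer.alloc(shares[0].length);
  for (const share of shares) {
    for (let i = 0; i < out.length; i++) out[i] ^= share[i];
  }
  return out;
}

const secret = Buffer.from('patient-record');
const shares = xorSplit(secret, 3);
console.log(xorJoin(shares).toString()); // 'patient-record'
// Any proper subset of shares is statistically indistinguishable from random noise
```

This is what "information-theoretic" means here: missing shares leave zero information about the plaintext, independent of attacker compute power.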
Multi-Transport Routing
For true channel separation, provide one transport adapter per share. The SDK routes share[i] to transports[i % transports.length] using modulo arithmetic. With 3 shares and 3 transports, share 0 goes to transport 0, share 1 to transport 1, share 2 to transport 2. With 3 shares and 2 transports, shares route as 0→0, 1→1, 2→0 (wraps around).
```typescript
const agent = await Agent.create({
  name: 'SecureAgent',
  registry,
  transport: [
    new HttpsTransportAdapter({ baseUrl: 'https://ch1.example.com' }),
    new HttpsTransportAdapter({ baseUrl: 'https://ch2.example.com' }),
    new HttpsTransportAdapter({ baseUrl: 'https://ch3.example.com' }),
  ],
});

// Shares route: [0 → ch1, 1 → ch2, 2 → ch3]
await agent.send({
  to: recipientDid,
  payload: sensitiveData,
  scope: 'classified',
  security: 'high', // Auto 2-of-3 split across 3 transports
});
```
Channel independence is the developer’s responsibility. For maximum security, use different infrastructure providers, different network paths, or different geographic regions for each transport. The SDK warns at runtime if transports.length < totalShares.
| Scenario | Shares | Transports | Routing | Security |
|---|---|---|---|---|
| Ideal | 3 | 3 | 0→0, 1→1, 2→2 | Full channel separation |
| Partial | 3 | 2 | 0→0, 1→1, 2→0 | Two channels (share reuse) |
| Single | 3 | 1 | 0→0, 1→0, 2→0 | No channel separation |
transport parameter accepts either a single XailTransportAdapter or an array of adapters. Passing a single adapter is equivalent to transport: [adapter] — all shares route through it. Use arrays to achieve true multi-path delivery. See distributed messaging patterns for routing architecture guidance.
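The modulo routing rule can be sketched as follows; `routeShares` is a hypothetical helper that reproduces the scenario table above, not part of the SDK.

```typescript
// share i is delivered via transports[i % transports.length]
function routeShares(shareCount: number, transportCount: number): number[] {
  return Array.from({ length: shareCount }, (_, i) => i % transportCount);
}

console.log(routeShares(3, 3)); // [ 0, 1, 2 ] — full channel separation
console.log(routeShares(3, 2)); // [ 0, 1, 0 ] — share 2 wraps to transport 0
console.log(routeShares(3, 1)); // [ 0, 0, 0 ] — no channel separation
```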
V3 Protocol (Default for Split-Channel)
When split-channel mode is activated (via automatic policy or explicit security: 'high') without Xchange, the SDK uses V3 envelopes with full post-quantum protection and three independent cryptographic layers:
| Layer | Technology | Purpose | Standard |
|---|---|---|---|
| 1. Payload | XorIDA (GF(2)) | Information-theoretic splitting | Proprietary (patent-protected) |
| 2. Key Exchange | X25519 + ML-KEM-768 | Hybrid PQ session keys | FIPS 203 (always-on) |
| 3. Authentication | Ed25519 + ML-DSA-65 | Dual signature verification | FIPS 204 (opt-in via postQuantumSig: true) |
V3 is the default for split-channel mode. Each share is independently encrypted with AES-256-GCM using a session key derived from hybrid KEM. The sender generates an ephemeral X25519 key pair AND performs ML-KEM-768 encapsulation per message. Both shared secrets combine via HKDF-SHA256. Authentication uses Ed25519 (always) plus ML-DSA-65 (when enabled). V3 provides defense-in-depth: compromise of any single cryptographic primitive does not break the system.
Xchange Mode (Opt-In Performance)
For latency-critical workloads (IoT, high-frequency M2M, real-time agents), Xchange mode trades per-share encryption and KEM for up to 180× faster operation (estimated). Activated explicitly via xchange: true on both agent creation and send.
```typescript
// Agent opts in to Xchange support
const agent = await Agent.create({
  name: 'FastAgent',
  registry,
  transport,
  xchange: true,
});

// Xchange on send (security policy still applies)
await agent.send({
  to: recipientDid,
  payload: sensorReading,
  scope: 'telemetry',
  security: 'high', // Split-channel with Xchange speed
});
```
Default split-channel uses V3 with three independent cryptographic layers: XorIDA payload split, hybrid PQ KEM (X25519 + ML-KEM-768), and dual signatures (Ed25519 + ML-DSA-65). Xchange mode is for scenarios where latency matters more than defense-in-depth.
Identity Layers in private.me
Three composable identity layers. Xlink is Layer 1. XID adds ephemeral unlinkability. Xfuse adds threshold convergence for high-assurance scenarios.
The private.me platform provides a three-layer identity architecture. Each layer builds on the previous, offering progressively stronger privacy guarantees and multi-factor assurance. Applications choose the layer that matches their security requirements.
Layer 1: Xlink — Cryptographic Identity
Xlink provides the foundational identity layer. Every agent has a persistent DID (did:key:z6Mk...) backed by Ed25519 signing and X25519 key agreement. This is the identity used for most M2M communication, agent-to-agent messaging, and service authentication.
```typescript
import { Agent } from '@private.me/xlink';

// Create a persistent agent identity
const agent = (await Agent.create({ name: 'MyService', registry, transport })).value!;

// DID is stable across sessions
console.log(agent.did);
// did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH

// Same DID on every restart (if keys persisted)
const pkcs8 = await agent.exportPKCS8();
const restored = (await Agent.importIdentity({ pkcs8, registry, transport })).value!;
console.log(restored.did === agent.did); // true
```
Layer 2: XID — Ephemeral Unlinkable Identity
XID adds per-verifier ephemeral DIDs derived from a master seed via HKDF. Each relationship sees a different DID. Cross-context tracking becomes impossible. DIDs rotate on a configurable schedule (epoch-based). The master seed is XorIDA-split and never stored in plaintext.
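The per-verifier derivation idea can be sketched with plain HKDF. This is illustrative only — the actual XID derivation schedule and DID encoding are defined in the XID white paper, and the info-string format here is an assumption.

```javascript
import { hkdfSync, randomBytes } from 'node:crypto';

// Master seed (in XID this is XorIDA-split and never stored whole)
const masterSeed = randomBytes(32);

// Derive key material bound to a specific verifier and epoch index
function deriveForVerifier(seed, verifierId, epoch) {
  const info = `xid:${verifierId}:epoch:${epoch}`; // assumed label format
  return Buffer.from(hkdfSync('sha256', seed, Buffer.alloc(0), info, 32));
}

const a0 = deriveForVerifier(masterSeed, 'ServiceA', 0);
const b0 = deriveForVerifier(masterSeed, 'ServiceB', 0);
const a1 = deriveForVerifier(masterSeed, 'ServiceA', 1);

console.log(a0.equals(b0)); // false — different per verifier (unlinkable)
console.log(a0.equals(a1)); // false — rotates across epochs
console.log(a0.equals(deriveForVerifier(masterSeed, 'ServiceA', 0))); // true — deterministic
```

HKDF's pseudorandomness is what makes the derived identities unlinkable: without the master seed, outputs for different verifiers or epochs cannot be correlated.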
```typescript
import { EphemeralIdentity } from '@private.me/xid';

// Create ephemeral identity manager (master seed is split-protected)
const eph = await EphemeralIdentity.create({
  epochDurationMs: 86400000, // 24 hours
  splitConfig: { totalShares: 3, threshold: 2 }
});

// Derive ephemeral DID for specific verifier
const did1 = await eph.deriveForVerifier('ServiceA');
const did2 = await eph.deriveForVerifier('ServiceB');
console.log(did1 !== did2); // true — unlinkable across contexts

// DID rotates on epoch boundary
const epoch = 0; // illustrative: current epoch index
const nextEpoch = await eph.deriveForVerifier('ServiceA', epoch + 1);
console.log(nextEpoch !== did1); // true — unlinkable across time
```
See the XID white paper for full technical details on HKDF derivation schedules, epoch management, and split-protected seed storage.
Layer 3: Xfuse — Threshold Identity Convergence
Xfuse adds K-of-N threshold convergence for high-assurance scenarios. Identity is established by presenting K independent signals (password + biometric + device credential + trusted third party attestation). Signals converge via XorIDA to derive a session-bound DID. No single signal is sufficient.
```typescript
import { FusionManager } from '@private.me/xfuse';

// Configure threshold identity fusion
const fusion = new FusionManager({
  threshold: 2,
  totalSignals: 3,
  ial: 'IAL2', // NIST 800-63A assurance level
});

// Enroll three independent signals
await fusion.enrollSignal({ type: 'password', value: hashedPassword });
await fusion.enrollSignal({ type: 'biometric', value: fingerprintTemplate });
await fusion.enrollSignal({ type: 'device', value: tpmAttestation });

// Authenticate with any 2 of 3 signals
const result = await fusion.converge([
  { type: 'password', value: hashedPassword },
  { type: 'biometric', value: currentFingerprint },
]);

// Session-bound DID derived from converged signals
console.log(result.did); // did:key:z6Mk... (ephemeral, session-scoped)
console.log(result.assuranceLevel); // IAL2
```
See the Xfuse white paper for threshold convergence algorithms, signal diversity requirements, assurance level mapping (IAL1/IAL2/IAL3), and MPC-verified convergence.
Composability & Layer Selection
Applications select the identity layer at integration time. The three layers are independent but composable. A single codebase can use Layer 1 for internal services, Layer 2 for customer-facing endpoints, and Layer 3 for administrative access.
| Layer | Identity Model | Privacy Guarantee | Use Cases |
|---|---|---|---|
| Layer 1: Xlink | Persistent DID | Encrypted + signed messaging | M2M, agent communication, internal APIs |
| Layer 2: XID | Ephemeral per-verifier DID | Cross-context unlinkability | Customer apps, eIDAS compliance, GDPR |
| Layer 3: Xfuse | K-of-N threshold convergence | Multi-factor high-assurance | Defense, finance, healthcare, gov |
```typescript
// Internal service — Layer 1 (persistent identity)
const internalAgent = await Agent.create({ name: 'InternalService', registry, transport });

// Customer app — Layer 2 (ephemeral unlinkable)
const customerEph = await EphemeralIdentity.create({ epochDurationMs: 3600000 });
const customerDid = await customerEph.deriveForVerifier('CustomerPortal');

// Admin access — Layer 3 (threshold convergence)
const adminFusion = new FusionManager({ threshold: 3, totalSignals: 4, ial: 'IAL3' });
const adminResult = await adminFusion.converge(signals);
```
Layer 1 (Xlink) is the foundation. Layer 2 (XID) and Layer 3 (Xfuse) are optional enhancements. Most applications use Layer 1 exclusively. Add Layer 2 when unlinkability is required. Add Layer 3 when regulatory compliance demands multi-factor high-assurance identity.
Identity & Persistence
Ed25519 signing + X25519 key agreement via Web Crypto API. Hybrid post-quantum: ML-KEM-768 for key exchange (always-on), ML-DSA-65 for signatures (opt-in). Multiple persistence strategies for different environments.
DID:key Format
Each agent identity is encoded as a did:key DID string: did:key:z6Mk.... The DID embeds the raw Ed25519 public key with a multicodec prefix (0xed01) and base58btc encoding. Anyone with the DID can verify signatures without a network lookup.
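The encoding can be sketched end-to-end with Node built-ins plus a minimal base58 encoder. This is an illustrative reconstruction of the did:key format described above, not SDK code: take the raw 32-byte Ed25519 public key, prepend the multicodec varint 0xed 0x01, base58btc-encode, and prefix with `z`.

```javascript
import { generateKeyPairSync } from 'node:crypto';

const BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

// Minimal base58btc encoder (sufficient for short inputs like ours)
function base58btc(bytes) {
  let n = BigInt('0x' + Buffer.from(bytes).toString('hex'));
  let out = '';
  while (n > 0n) {
    out = BASE58[Number(n % 58n)] + out;
    n /= 58n;
  }
  // Preserve leading zero bytes as '1' characters
  for (const b of bytes) { if (b === 0) out = '1' + out; else break; }
  return out;
}

// Raw 32-byte Ed25519 public key = last 32 bytes of the SPKI DER encoding
const { publicKey } = generateKeyPairSync('ed25519');
const spki = publicKey.export({ type: 'spki', format: 'der' });
const raw = spki.subarray(spki.length - 32);

// Multicodec prefix 0xed 0x01, then base58btc with multibase prefix 'z'
const did = 'did:key:z' + base58btc(Buffer.concat([Buffer.from([0xed, 0x01]), raw]));
console.log(did.startsWith('did:key:z6Mk')); // true — all Ed25519 did:keys share this prefix
```

The `z6Mk` prefix every Ed25519 did:key shares falls out of the fixed 0xed01 header bytes under base58btc.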
```typescript
import { Agent } from '@private.me/xlink';

// did:key format (ephemeral, local testing)
const agent = await Agent.quickstart({ name: 'Alice' });
console.log(agent.did); // did:key:z6MkrR...

// did:web format (production, resolvable)
const prodAgent = await Agent.create({
  did: 'did:web:example.com:alice',
  registry,
  transport
});

// DIDs are self-certifying identifiers (decentralized)
// No registration server needed — generate locally
```
Persistence Strategies
Complete Flow
End-to-end: identity creation through message delivery and verification.
Sender Pipeline
1. Agent.create() generates Ed25519 + X25519 keys (+ ML-DSA-65 keys when postQuantumSig: true) and registers with the trust registry.
2. agent.send() resolves the recipient DID from the registry.
3. Hybrid key exchange: X25519 ECDH + ML-KEM-768 KEM, combined via HKDF-SHA256 (always-on for v2+ agents). SHA-256 fallback for v1 peers.
4. Payload encrypted with AES-256-GCM (12-byte IV, fresh per message).
5. Ciphertext signed with Ed25519 (+ ML-DSA-65 dual signature in v3 envelopes).
6. Envelope assembled: v2/v3, sender DID, recipient DID, timestamp, nonce, scope, payload, signature(s), ephemeralPub, kemCiphertext.
7. Transport adapter delivers the envelope.
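Steps 4-5 (encrypt-then-sign) can be reproduced with Node's built-in crypto. This is a simplified sketch, not the SDK's envelope code: the KEM, registry, and envelope framing are omitted, and the session key is random for illustration.

```javascript
import {
  createCipheriv, createDecipheriv, generateKeyPairSync,
  sign, verify, randomBytes
} from 'node:crypto';

const sessionKey = randomBytes(32); // stand-in for the HKDF-derived hybrid key
const iv = randomBytes(12);         // fresh 12-byte IV per message (step 4)

// Step 4: encrypt payload with AES-256-GCM
const plaintext = Buffer.from(JSON.stringify({ action: 'createOrder' }));
const cipher = createCipheriv('aes-256-gcm', sessionKey, iv);
const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
const authTag = cipher.getAuthTag();

// Step 5: sign the ciphertext bytes with Ed25519 (encrypt-then-sign)
const { publicKey, privateKey } = generateKeyPairSync('ed25519');
const signature = sign(null, ciphertext, privateKey);

// Receiver side: verify the signature BEFORE decrypting (step 12 then 15)
if (!verify(null, ciphertext, publicKey, signature)) throw new Error('bad signature');
const decipher = createDecipheriv('aes-256-gcm', sessionKey, iv);
decipher.setAuthTag(authTag);
const recovered = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
console.log(recovered.toString()); // {"action":"createOrder"}
```

Signing the ciphertext rather than the plaintext is what makes any in-transit modification fail verification before decryption is even attempted.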
Receiver Pipeline
8. agent.receive() validates envelope version (v1/v2/v3) and algorithm.
9. Timestamp checked against configurable window (default 30s).
10. Nonce checked against NonceStore — rejects duplicates (replay prevention).
11. Sender DID resolved from trust registry — must be registered and not revoked.
12. Ed25519 signature verified. ML-DSA-65 signature also verified if present (v3 envelopes).
13. Sender's scope validated against the claimed scope in the envelope.
14. Shared key derived via hybrid KEM (X25519 + ML-KEM-768) or SHA-256 fallback for v1.
15. Payload decrypted with AES-256-GCM.
16. JSON parsed and returned as AgentMessage.
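Steps 9-10 amount to a timestamp window plus a nonce set with TTL. A minimal in-memory sketch (analogous to, but not, the SDK's MemoryNonceStore):

```javascript
const WINDOW_MS = 30_000; // default 30s timestamp window

class SimpleNonceStore {
  constructor() { this.seen = new Map(); } // nonce → expiry time
  check(nonce, timestamp, now = Date.now()) {
    // Step 9: reject envelopes outside the timestamp window
    if (Math.abs(now - timestamp) > WINDOW_MS) return { ok: false, reason: 'stale' };
    // Evict expired nonces so the map stays bounded
    for (const [n, exp] of this.seen) if (exp <= now) this.seen.delete(n);
    // Step 10: reject duplicates (replay prevention)
    if (this.seen.has(nonce)) return { ok: false, reason: 'replay' };
    this.seen.set(nonce, now + WINDOW_MS);
    return { ok: true };
  }
}

const store = new SimpleNonceStore();
const now = Date.now();
console.log(store.check('abc', now, now).ok);              // true — first sight
console.log(store.check('abc', now, now).reason);          // 'replay' — duplicate
console.log(store.check('xyz', now - 60_000, now).reason); // 'stale' — outside window
```

Nonces only need to be remembered for the length of the timestamp window, which is why the store can expire entries aggressively.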
Integration Patterns
Five patterns for different deployment contexts.
Express Middleware
```typescript
import express from 'express';

const app = express();

// Verify incoming agent envelopes
app.post('/api/messages', agent.middleware(), (req, res) => {
  const msg = req.agentMessage;
  console.log('From:', msg.sender, 'Scope:', msg.scope);
  res.json({ ok: true });
});
```
Registry Auth Middleware
```typescript
import { createRegistryAuthMiddleware } from '@private.me/xlink';

// GET /registry/resolve/:did → public (no auth)
// POST /registry/register → requires Bearer token
app.use('/registry', createRegistryAuthMiddleware(process.env.REGISTRY_ADMIN_TOKEN));
```
IoT Composable Pattern
```typescript
import { generateIdentity, createSignedEnvelope, splitForChannel } from '@private.me/xlink';

const id = (await generateIdentity()).value!;
const reading = new TextEncoder().encode(JSON.stringify({ temp: 22.5 }));

// Signed-only envelope (no encryption, integrity-only)
const envelope = await createSignedEnvelope({
  senderDid: id.did,
  recipientDid: gatewayDid,
  scope: 'telemetry',
  plaintext: reading,
  privateKey: id.privateKey,
});

// Or split across channels for redundancy
const shares = await splitForChannel(reading, { totalShares: 3, threshold: 2 });
```
Signed-Only Telemetry
```typescript
// Gateway accepts both encrypted and signed-only envelopes
const msg = await gateway.receive(envelope, { allowCleartext: true });
if (msg.ok) {
  console.log(msg.value.payload); // works for both modes
}
```
Purchase API Integration
```typescript
import { Agent } from '@private.me/xlink';

// Create agent (or use existing)
const agent = await Agent.quickstart({ name: 'purchase-agent' });

// Wrap purchase request in signed envelope
const envelope = await agent.createEnvelope({
  to: 'did:key:z6Mkp...', // Private.Me server DID
  payload: { aci: 'xlink', tier: 'basic', paymentMethod: 'pm_card_visa' },
  scope: 'aci:purchase'
});

// Submit to ACI Purchase Endpoint
const response = await fetch('https://private.me/api/purchase', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Client-Type': 'ai-agent' // 60 req/min rate limit
  },
  body: JSON.stringify(envelope)
});

const result = await response.json();
console.log(result.xlinkAuthenticated); // true
console.log(result.connectionId); // For deployment
```
xLink Agent API
The xLink Agent API is the primary interface for building M2M applications with xLink. It provides high-level abstractions for identity, discovery, policy enforcement, and audit trails while handling all cryptographic operations automatically.
Quick Start with Agent.quickstart()
The fastest path to a working agent. Agent.quickstart() generates an identity, creates an agent instance, and optionally connects to a service in a single call. Ideal for prototyping, demos, and tutorials.
```typescript
import { Agent } from '@private.me/xlink';

// Create agent with auto-generated identity
const result = await Agent.quickstart({ name: 'my-service' });
if (!result.ok) throw result.error;
const agent = result.value;

// Use immediately
await agent.send({
  to: 'did:key:z6Mk...',
  payload: { action: 'processData', data: [...] },
  scope: 'integration'
});
```
Agent.quickstart() is production-ready. It uses the same identity generation and cryptographic primitives as manual setup. The only difference is convenience — identity generation happens automatically instead of explicitly.
Creating Agents (Three Patterns)
Beyond quickstart, the xLink Agent API supports three creation patterns for different deployment contexts: explicit identity, lazy initialization, and invite-based onboarding.
Pattern 1: Explicit Identity (Full Control)
```typescript
import { generateIdentity, Agent } from '@private.me/xlink';

// Generate identity first
const idResult = await generateIdentity();
if (!idResult.ok) throw idResult.error;
const identity = idResult.value;

// Create agent with explicit identity
const agent = new Agent({
  name: 'payments-service',
  identity: identity,
  trustRegistry: myRegistry,
  transport: myTransport
});

// Identity can be persisted to disk/env for reuse
process.env.AGENT_DID = identity.did;
process.env.AGENT_PRIVATE_KEY = Buffer.from(identity.privateKey).toString('base64');
```
Pattern 2: Lazy Initialization (Auto-Generate)
```typescript
import { Agent } from '@private.me/xlink';

// Create lazy agent (no identity yet)
const agent = Agent.lazy({ name: 'my-service' });

// Identity generated automatically on first send
await agent.send({ to: recipientDid, payload: {...}, scope: 'data' });
```
Pattern 3: Invite-Based (Environment Variable)
```typescript
// .env file
// XLINK_INVITE_CODE=XLK-abc123def456

// Code
const agent = Agent.lazy({ name: 'my-service' });

// SDK auto-accepts invite and configures trust on first send
await agent.send({ to: partnerDid, payload: {...}, scope: 'integration' });
```
Tool Discovery with agent.discover()
Agents can advertise capabilities (tools/actions they support) and discover other agents' capabilities. This enables dynamic service composition and reduces manual configuration.
```typescript
// Service advertises capabilities
const agent = new Agent({
  name: 'payments-service',
  identity: identity,
  capabilities: [
    { name: 'createCharge', scopes: ['payments'] },
    { name: 'refundCharge', scopes: ['payments'] },
    { name: 'getBalance', scopes: ['payments:read'] }
  ]
});

// Other agents discover capabilities
const disco = await agent.discover('did:key:z6MkPayments...');
if (disco.ok) {
  console.log(disco.value.capabilities);
  // Output: ['createCharge', 'refundCharge', 'getBalance']

  // Check if service supports specific capability
  const canRefund = disco.value.capabilities.includes('refundCharge');
}
```
Per-Agent Policy Enforcement
Each agent instance can have its own policy configuration: allowed scopes, rate limits, message size limits, and trust rules. Policies are enforced at the agent level before messages reach the network.
```typescript
import { Agent, MemoryTrustRegistry } from '@private.me/xlink';

const registry = new MemoryTrustRegistry();

// Register recipient with allowed scopes
await registry.register(
  recipientDid,
  recipientPublicKey,
  'Analytics Service',
  ['analytics'],            // Can send analytics data
  undefined, undefined, undefined, false,
  ['payments', 'orders']    // Can receive payments + orders ONLY
);

const agent = new Agent({
  name: 'payments-service',
  identity: myIdentity,
  trustRegistry: registry,
  policy: {
    maxMessageSize: 1024 * 1024, // 1 MB limit
    rateLimits: { maxPerMinute: 100, maxPerHour: 5000 },
    allowedScopes: ['payments', 'orders'],
    requireMutualTrust: true // Reject messages from unregistered DIDs
  }
});

// Policy enforced on send
const result = await agent.send({
  to: recipientDid,
  payload: { action: 'processOrder', ... },
  scope: 'payments' // OK: scope allowed by recipient
});

// This fails: scope not in recipient's receive list
const blocked = await agent.send({
  to: recipientDid,
  payload: {...},
  scope: 'admin' // ERROR: ScopeViolation
});
```
Audit Receipts (Proof of Delivery)
Every sent message can optionally return an audit receipt: a signed acknowledgment from the recipient proving they received and processed the message. Receipts include timestamps, message hashes, and processing status.
```typescript
const result = await agent.send({
  to: recipientDid,
  payload: { action: 'createCharge', amount: 10000 },
  scope: 'payments',
  requestReceipt: true // Request signed receipt
});

if (result.ok && result.value.receipt) {
  const receipt = result.value.receipt;
  console.log('Receipt from:', receipt.signer);
  console.log('Message hash:', receipt.messageHash);
  console.log('Received at:', new Date(receipt.timestamp));
  console.log('Processing status:', receipt.status);

  // Verify receipt signature (automatic, but can verify again)
  const valid = await agent.verifyReceipt(receipt);
  console.log('Receipt valid:', valid.ok);

  // Store receipt for compliance/audit
  await auditLog.store(receipt);
}
```
Agent API Reference (Core Methods)
When to Use the Agent API vs Lower-Level APIs
The Agent API handles 95% of use cases. For IoT, embedded systems, and custom protocols, see Integration Patterns for lower-level APIs like createSignedEnvelope() and splitForChannel().
Security Properties
Seven layers of defense. Each independently verifiable.
| Property | Mechanism | Guarantee |
|---|---|---|
| Confidentiality | AES-256-GCM per message | Payload encrypted in transit and at rest |
| Authentication | Ed25519 + ML-DSA-65 (dual) | Sender identity verified on every envelope (PQ-safe with opt-in) |
| Integrity | Encrypt-then-sign | Any modification fails verification |
| Non-repudiation | Ed25519 + ML-DSA-65 dual signature | Sender cannot deny sending (quantum-safe with opt-in) |
| Forward secrecy | X25519 + ML-KEM-768 hybrid | Post-quantum forward secrecy (always-on) |
| Replay prevention | Nonce store + timestamp | Duplicate envelopes rejected |
| Info-theoretic | XorIDA split-channel | K-1 shares reveal zero bits |
```typescript
import { MemoryTrustRegistry } from '@private.me/xlink';

const registry = new MemoryTrustRegistry();

// Register sender with send scopes
await registry.register(
  senderDid,
  senderPublicKey,
  'Payment Service',
  ['payments', 'orders'],   // Send scopes
  undefined, undefined, undefined, false,
  ['payments', 'receipts'], // Receive scopes (accepts only these)
);

// Register receiver with receive scopes
await registry.register(
  receiverDid,
  receiverPublicKey,
  'Analytics Service',
  ['analytics'],            // Send scopes
  undefined, undefined, undefined, false,
  ['payments'],             // Receive scopes (accepts only payments data)
);
```
```typescript
import { Agent } from '@private.me/xlink';

// Standard mode: AES-256-GCM + Ed25519 + ML-DSA-65
await agent.send({
  to: recipientDID,
  payload: data,
  scope: 'regular',
  security: 'standard' // Default
});

// Forward Secrecy: + X25519 ECDH + ML-KEM-768
await agent.send({
  to: recipientDID,
  payload: data,
  scope: 'sensitive',
  security: 'high' // When both parties publish X25519 keys
});

// Split-Channel: + XorIDA (information-theoretic)
await agent.send({
  to: recipientDID,
  payload: highValueData,
  scope: 'financial',
  security: 'critical' // Shards via threshold sharing (2-of-3)
});
```
Supply Chain Security
Zero npm runtime dependencies. The SDK depends only on workspace packages (@private.me/shared, @private.me/crypto) which are part of the same monorepo. No third-party code executes at runtime. This eliminates the entire class of supply chain attacks from compromised npm packages.
Comparison to Alternatives
| Property | Xlink | TLS | Signal Protocol |
|---|---|---|---|
| E2E encryption | Yes | Transport only | Yes |
| Forward secrecy | Yes (Hybrid PQ KEM) | Yes (DHE) | Yes (Ratchet) |
| Info-theoretic | Yes (XorIDA) | No | No |
| Non-repudiation | Ed25519 + ML-DSA-65 | No | No |
| Replay prevention | Nonce store | Seq numbers | Chain keys |
| Zero npm deps | Yes | N/A | No |
| M2M focused | Yes | General | Person-to-person |
Benchmarks
Performance characteristics measured on Node.js 22, Apple M2. (Estimated results from internal testing)
| Operation | Time | Notes |
|---|---|---|
| Ed25519 keygen | <1ms | Web Crypto API native |
| X25519 keygen | <1ms | Web Crypto API native |
| ECDH key agreement | <1ms | Ephemeral + derive |
| AES-256-GCM encrypt (1KB) | <0.5ms | Hardware-accelerated |
| Ed25519 sign | <0.5ms | Covers ciphertext bytes |
| ML-KEM-768 encapsulate | ~2.7ms | FIPS 203 — @noble/post-quantum |
| ML-KEM-768 decapsulate | ~2.9ms | FIPS 203 — @noble/post-quantum |
| ML-DSA-65 sign | ~10ms | FIPS 204 — opt-in dual signature |
| ML-DSA-65 verify | ~10ms | FIPS 204 — opt-in dual signature |
| Full send pipeline (Xchange) | ~1ms | Random key + encrypt + XorIDA split + sign |
| Full receive pipeline (Xchange) | ~1ms | Verify + reconstruct + HMAC + decrypt |
| Full send pipeline (split-channel) | ~5ms | KEM + per-share AES-GCM + XorIDA split + Ed25519 |
| Full receive pipeline (split-channel) | ~4ms | Verify + KEM decaps + per-share decrypt + reconstruct |
| Full roundtrip (split-channel + ML-DSA-65) | ~29ms | Three layers + dual PQ signatures |
| XorIDA split (1MB, 3 shares) | ~52ms | GF(2) — 15x faster than Shamir's at 1MB |
| XorIDA reconstruct (1MB) | ~33ms | Including HMAC verification |
| Nonce check (memory) | <0.1ms | Map lookup + TTL check |
*Performance figures are estimates from internal testing. Actual performance varies by hardware, payload size, and configuration. Developers should benchmark their specific use case.
XorIDA vs AES-256-GCM — API Payload Performance
Typical ACI traffic — auth tokens, JSON responses, webhooks, chat messages — is under 1 KB. At these sizes, XorIDA’s simple XOR-only arithmetic completes a full split-and-reconstruct roundtrip up to 2–11× faster (estimated) than AES-256-GCM can encrypt-and-decrypt, while providing strictly stronger (information-theoretic) security.
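The XOR-splitting idea behind these numbers is simple to sketch in the 2-of-2 case. This is an illustrative toy, not the shipped XorIDA, which adds HMAC integrity checks and general k-of-n thresholds:

```javascript
import { randomBytes } from 'node:crypto';

// 2-of-2 XOR split: one share is uniform random, the other is
// plaintext XOR share1. Either share alone is indistinguishable from noise.
function split2of2(plaintext) {
  const share1 = randomBytes(plaintext.length);
  const share2 = Buffer.alloc(plaintext.length);
  for (let i = 0; i < plaintext.length; i++) share2[i] = plaintext[i] ^ share1[i];
  return [share1, share2];
}

// Reconstruction is the same XOR, applied share-by-share
function reconstruct([share1, share2]) {
  const out = Buffer.alloc(share1.length);
  for (let i = 0; i < share1.length; i++) out[i] = share1[i] ^ share2[i];
  return out;
}

const msg = Buffer.from('example auth token payload');
const shares = split2of2(msg);
console.log(reconstruct(shares).equals(msg)); // true
```

XOR over GF(2) needs no key schedule, no block rounds, and no modular arithmetic, which is why it outruns AES at small payload sizes.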
256-Byte Roundtrip — Visual Comparison
256B payload, 2,000 iterations, median roundtrip time.
Full Comparison — API Payload Sizes
| Payload | XorIDA 2-of-2 | XorIDA 2-of-3 | AES-256-GCM | Ratio (2-of-2) | Real-World Example |
|---|---|---|---|---|---|
| 64B | 14µs | 17µs | 160µs | 11.4× faster | IoT sensor reading, heartbeat |
| 128B | 23µs | 26µs | 160µs | 7.0× faster | Auth token, session ticket |
| 256B | 35µs | 41µs | 122µs | 3.5× faster | Chat message, SMS-length payload |
| 512B | 34µs | 39µs | 138µs | 4.0× faster | Webhook, API key exchange |
| 1KB | 58µs | 107µs | 140µs | 2.4× faster | REST API JSON response |
| 2KB | 211µs | 262µs | 142µs | 1.5× slower | GraphQL response |
| 4KB | 340µs | 313µs | 134µs | 2.5× slower | Large API response |
| 8KB | 644µs | 883µs | 227µs | 2.8× slower | Crossover zone |
Node.js 22 • 2,000 iterations per size • Median roundtrip (split+reconstruct / encrypt+decrypt)
*Performance figures are estimates from internal testing. Actual performance varies by hardware, payload size, and configuration. Developers should benchmark their specific use case.
Xlink vs Competitors — API Crypto Overhead
Full send + receive roundtrip measured end-to-end with real crypto operations. Competitor estimates use our measured primitive timings (same algorithms, same JS runtime) as reference. All columns use Xchange mode (opt-in performance path) for an apples-to-apples comparison; Xlink’s default split-channel adds KEM + per-share encryption (~9ms roundtrip) for three independent security layers.
256-Byte API Payload — Full Roundtrip
256B payload, 200 iterations, median roundtrip time.
Full Comparison — All API Payload Sizes
| Payload | Xlink (Xchange) | Tuta | Signal | Apple PQ3 | Use Case |
|---|---|---|---|---|---|
| 64B | 2.0ms | 4.2ms | 5.0ms | 5.0ms | Auth token, session ID |
| 128B | 1.8ms | 4.2ms | 5.0ms | 5.0ms | Webhook notification |
| 256B | 1.9ms | 4.2ms | 5.0ms | 5.0ms | Small JSON response |
| 512B | 1.8ms | 4.2ms | 5.0ms | 5.0ms | API request body |
| 1KB | 1.9ms | 4.2ms | 5.0ms | 5.0ms | Large JSON / config |
| 2KB | 1.9ms | 4.2ms | 5.0ms | 5.0ms | Batch API response |
| 4KB | 2.5ms | 4.2ms | 5.0ms | 5.0ms | Document payload |
| 8KB | 3.7ms | 4.3ms | 5.1ms | 5.1ms | Rich API response |
Node.js 22 • 200 iterations per size • Median send+receive roundtrip • 2-of-3 split
Crypto Overhead as % of Total API Latency (256B)
| Network Latency | Xlink (Xchange) | Tuta | Signal |
|---|---|---|---|
| Local (1ms) | 65.9% | 80.8% | 83.3% |
| Regional (10ms) | 16.2% | 29.6% | 33.3% |
| Cross-region (50ms) | 3.7% | 7.7% | 9.1% |
| Global (150ms) | 1.3% | 2.7% | 3.2% |
Xchange mode shown. Default split-channel overhead is higher (~9ms roundtrip). At 10ms+ latency, even split-channel overhead is modest. *Performance figures are estimates from internal testing. Actual performance varies by hardware, payload size, and configuration. Developers should benchmark their specific use case.
Security vs Performance
Ranked by roundtrip speed at 256B. Xlink appears twice: Xchange (opt-in performance mode, single information-theoretic layer) and split-channel default (three independent layers including hybrid PQ KEM).
| # | System | Roundtrip | Security Model | Channels | PQ Protection |
|---|---|---|---|---|---|
| 1 | Xlink (Xchange) | 1.9ms | Information-theoretic | k-of-n split | ✓ Unconditional |
| 2 | Tuta | 4.2ms | Computational | Single | ✓ Kyber KEM only |
| 3 | Signal | 5.0ms | Hybrid computational | Single | ✓ Kyber KEM only |
| 4 | Apple PQ3 | 5.0ms | Hybrid computational | Single | ✓ Kyber KEM only |
| 5 | Xlink (split-channel) | ~9ms | IT + computational (3 layers) | k-of-n split | ✓ KEM + opt-in ML-DSA |
Real-World Benchmark Results
Performance claims validated through production testing on real infrastructure. All results from internal testing on representative workloads.
1. Authentication Overhead
Ed25519 signature verification measured across 10,000 authentications in production:
| Metric | Result | Validates Claim |
|---|---|---|
| Median auth time | 0.36ms | <1ms signature verification |
| P95 auth time | 0.49ms | Sub-millisecond at scale |
| P99 auth time | 0.73ms | Consistent performance |
10,000 authentications • Production workload • Web Crypto API Ed25519
2. Cascading Failure Prevention
500 AI agents restarting simultaneously. OAuth token refresh vs. Xlink identity-based authentication:
| System | Time to Operational | Success Rate | Speedup |
|---|---|---|---|
| Xlink | 91ms | 100% | 603× faster |
| OAuth | 54.8s | 97.4% (13 timeouts) | — |
500 concurrent agents • Simulated cascade event • OAuth provider rate-limited at 100 req/sec
3. Restart Multiplication Effect
Multi-agent workflow architecture: Orchestrator → 5 sub-agents × 6 tool calls each. OAuth tokens expire every 60 minutes, forcing complete workflow restarts. Measured auth event multiplication:
| Workflow Duration | OAuth Auth Events | Xlink Auth Events | Event Multiplication |
|---|---|---|---|
| 10 minutes | 310 | 310 | 1.0× (no token expiry yet) |
| 2 hours | 9,300 | 3,720 | 2.5× (2 restarts) |
| 5 hours | 37,200 | 9,300 | 4.0× (5 restarts) |
Multi-agent orchestrator • 60-minute OAuth token lifetime • Each restart replays all prior auth events
4. End-to-End Latency (Multi-Hop Workflow)
Real-world agent workflow: 11 authentications across microservices. Measured total latency from initial request to final response:
| Auth System | E2E Latency | Auth Overhead | Improvement |
|---|---|---|---|
| Xlink | 1,110ms | 4.0ms (11 × 0.36ms) | 73.4% faster |
| OAuth (JWT validation) | 4,173ms | 3,067ms (network + validation) | — |
11-hop workflow • Regional network (10ms avg latency) • 200 iterations • Median E2E time
Summary: Claims vs. Reality
| White Paper Claim | Empirical Result | Status |
|---|---|---|
| <1ms authentication | 0.36ms median, 0.49ms P95 | Validated |
| Eliminates cascading failures | 603× speedup, 100% success vs 97.4% | Validated |
| 3–5× restart multiplication | 2.5–4.0× multiplication (2-5hr workflows) | Validated |
| 60–90% faster (multi-hop workflows) | 73.4% faster (11-auth workflow) | Validated |
*All results from internal testing on representative production workloads. Actual performance varies by infrastructure, network conditions, and workload characteristics. These results validate architectural claims but are not guarantees for all deployments.
Honest Limitations
Eight known limitations documented transparently. No product is perfect — here is what Xlink does not do.
| Limitation | Impact | Mitigation |
|---|---|---|
| Cleartext headers | Sender/recipient DIDs, scope, timestamp visible to network observers. Traffic analysis possible. | Payload is encrypted. Use TLS for transport-level protection of headers. |
| No push revocation | Revoked agents' in-flight messages may be processed before receiver re-checks registry. | Keep timestamp windows short (default 30s). |
| No key rotation | Ed25519 identity keys are permanent. No built-in rotation protocol. | Create new identity and re-register. Old DID becomes unused. |
| SHA-256 fallback | When ECDH unavailable, shared key is deterministic. Not forward-secret. | Ensure both parties publish X25519 keys for automatic ECDH upgrade. |
| Ephemeral nonce store | MemoryNonceStore clears on process restart. Replays within timestamp window succeed. | Use RedisNonceStore for production deployments. |
| Clock dependency | Timestamp validation assumes synchronized clocks. Large skew causes false rejections. | Use NTP. Increase timestampWindowMs for high-latency networks. |
| No payload size limits | SDK does not enforce maximum payload size. Large payloads can exhaust memory. | Validate payload size at application layer before calling send(). |
| Registry trust | Compromised registry can substitute public keys or modify scopes. | Use createRegistryAuthMiddleware() with bearer token auth on writes. |
Post-Quantum Security
Xlink is end-to-end post-quantum. XorIDA payload splitting is information-theoretically quantum-safe. Key exchange uses hybrid X25519 + ML-KEM-768 (always-on, FIPS 203). Signatures use dual Ed25519 + ML-DSA-65 (opt-in via postQuantumSig: true, FIPS 204). All three cryptographic layers — payload, key exchange, and authentication — have quantum-safe implementations deployed.
Two Security Layers
The Xlink architecture has two distinct cryptographic layers with different quantum profiles:
| Layer | Current | Quantum Status | Upgrade Path |
|---|---|---|---|
| Payload (XorIDA) | GF(2) threshold sharing | Quantum-safe now | No change needed |
| Symmetric encryption | AES-256-GCM | Quantum-safe now | No change needed |
| Key establishment | X25519 + ML-KEM-768 (hybrid) | Hybrid PQ — Phase 1 live | Phase 1 deployed (v2 envelopes) |
| Identity / signatures | Ed25519 + ML-DSA-65 (dual) | Hybrid PQ — Phase 2 deployed (opt-in) | Phase 2 deployed (v3 envelopes) |
Migration Strategy
The upgrade follows a three-phase hybrid-first approach. Each phase maintains backward compatibility with the previous one.
Algorithm Profile
| Function | Algorithm | Standard | Key / Sig Size |
|---|---|---|---|
| KEM (Phase 1+) | ML-KEM-768 | FIPS 203 | 1,184 B pub / 1,088 B ct |
| Hybrid KEM | X25519 + ML-KEM-768 | IETF draft | HKDF over both shared secrets |
| Signature (Phase 2, deployed) | ML-DSA-65 | FIPS 204 | 1,952 B pub / 3,309 B sig |
| Symmetric | AES-256-GCM | NIST SP 800-38D | Unchanged |
| Payload splitting | XorIDA GF(2) | Proprietary | Unchanged |
Latency Impact
Post-quantum operations add minimal computational overhead. The dominant latency remains network I/O, not cryptography.
| Operation | Current (Classical) | Hybrid (Phase 1) | Delta |
|---|---|---|---|
| Key exchange | <0.1ms (X25519) | ~0.3ms (X25519 + ML-KEM) | +0.2ms |
| Sign | <0.1ms (Ed25519) | ~1.1ms (Ed25519 + ML-DSA) | +1.0ms |
| Verify | <0.2ms (Ed25519) | ~0.7ms (Ed25519 + ML-DSA) | +0.5ms |
| Full handshake | ~0.4ms | ~2.1ms | +1.7ms |
| Envelope overhead | ~128 bytes | ~6,340 bytes | +6.1 KB |
Envelope Version
The envelope format uses a v2 version tag signaling hybrid PQ support. Agents negotiate capabilities automatically:
- v1 agents communicate using classical crypto (X25519-only ECDH)
- v2 agents communicate using hybrid PQ + classical crypto (X25519 + ML-KEM-768)
- v2 agents automatically fall back to v1 when communicating with v1 peers
- v3 agents add dual signatures (Ed25519 + ML-DSA-65) — deployed, opt-in via postQuantumSig: true
Protocol Security Stack
The Xlink protocol secures messages at three independent cryptographic layers. Each layer addresses a different threat surface. Together, they provide full-stack quantum safety with no single point of cryptographic failure.
Three-Layer Architecture
| Layer | Function | Algorithm | Standard | Quantum Status |
|---|---|---|---|---|
| Payload | Message confidentiality | XorIDA threshold sharing over GF(2) | Proprietary | Immune |
| Key Exchange | Session key establishment | X25519 + ML-KEM-768 (hybrid) | FIPS 203 | Quantum-safe |
| Authentication | Identity verification & integrity | Ed25519 + ML-DSA-65 (dual) | FIPS 204 | Quantum-safe |
Layer 1 — Payload (Information-Theoretic)
XorIDA splits the plaintext into threshold shares using XOR operations over GF(2). Each share is individually indistinguishable from random noise. No computation — classical or quantum — can extract information from fewer than k shares. This is not computational hardness; it is a mathematical proof. The payload layer is immune to harvest-now-decrypt-later attacks because there is no key to break, no structure to exploit, and no algorithm that reduces the problem.
Layer 2 — Key Exchange (Hybrid Post-Quantum KEM)
Session keys are established using a hybrid key encapsulation mechanism: classical X25519 ECDH combined with ML-KEM-768 (FIPS 203, formerly CRYSTALS-Kyber). Both shared secrets are combined via HKDF-SHA256 to derive the session key. The session key is secure as long as either X25519 or ML-KEM-768 remains unbroken — defense in depth. Hybrid KEM is live in v2 envelopes (Phase 1, deployed).
Layer 3 — Authentication (Dual Signatures)
Message authentication and sender identity use dual signatures: classical Ed25519 plus post-quantum ML-DSA-65 (FIPS 204). Both signatures must verify for the message to be accepted. ML-DSA-65 signatures are opt-in via postQuantumSig: true in the agent configuration. When enabled, v3 envelopes carry both signatures. Classical-only agents continue to work with v1/v2 envelopes.
Envelope Version Progression
- v1 — Classical only. X25519 ECDH key exchange, Ed25519 signatures.
- v2 — Hybrid PQ KEM. X25519 + ML-KEM-768 key exchange, Ed25519 signatures. Backward-compatible with v1 peers.
- v3 — Full PQ. Hybrid KEM + dual signatures (Ed25519 + ML-DSA-65). Backward-compatible with v1/v2 peers. Default for split-channel.
- Xchange — Opt-in performance mode. XorIDA key transport replaces KEM. Single security layer (information-theoretic) + Ed25519 authentication. Up to 180× faster than v3 split-channel (estimated). Activated explicitly via xchange: true.
v1, v2, and v3 form the version progression; Xchange is a separate opt-in mode, not a version. Agents negotiate capabilities automatically: a v3 agent communicating with a v1 peer falls back to the v1 envelope format. No developer action is required — the protocol handles version negotiation internally.
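The fallback rule reduces to "highest mutually supported version." A minimal sketch of that selection, with illustrative capability flags (not the SDK's internal representation):

```typescript
// Sketch of capability-based version selection matching the fallback
// behavior described above. The Caps shape is illustrative.
type Caps = { mlKem: boolean; mlDsa: boolean };

function negotiateVersion(sender: Caps, receiver: Caps): 'v1' | 'v2' | 'v3' {
  if (sender.mlKem && receiver.mlKem) {
    // Hybrid KEM available on both sides; dual signatures need both too.
    return sender.mlDsa && receiver.mlDsa ? 'v3' : 'v2';
  }
  return 'v1'; // classical-only fallback
}

const v3Peer: Caps = { mlKem: true, mlDsa: true };
const v1Peer: Caps = { mlKem: false, mlDsa: false };
console.log(negotiateVersion(v3Peer, v1Peer)); // → "v1"
```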
With postQuantumSig: true, all three cryptographic layers are quantum-safe. The payload is information-theoretically immune. Key exchange uses hybrid PQ KEM (FIPS 203). Authentication uses dual PQ signatures (FIPS 204). All 68 ACIs inherit this protection by updating one dependency.
Enterprise CLI
Self-hosted identity server. Docker-ready. Air-gapped capable. Port 3300. 73 tests. Part of the Enterprise CLI Suite.
@private.me/xlink-cli provides a production-grade identity management server with a full REST API, agent lifecycle operations, trust registry management, and envelope processing. It integrates the complete @private.me/xlink SDK for enterprise M2M deployments.
CLI Commands
# Start the Xlink identity server
xlink serve --port 3300
→ HTTP server on :3300 with agent management + trust registry

# Create a new agent identity
xlink create --name "MyService"
→ Generates Ed25519 + X25519 keypairs, returns DID

# Send an encrypted envelope
xlink send --from "did:key:z6Mk..." --to "did:key:z6Mk..." --payload '{"action":"hello"}'
→ Creates v3 envelope, encrypts, signs, delivers

# Receive and decrypt an envelope
xlink receive --agent "did:key:z6Mk..." --envelope "base64-envelope"
→ Verifies signature, decrypts, validates nonce, returns payload

# Register a DID in trust registry
xlink register --did "did:key:z6Mk..." --name "Production API" --scope "api"
→ Adds to trust registry with scope-based permissions

# Revoke a compromised agent
xlink revoke --did "did:key:z6Mk..." --reason "key_compromise"
→ Marks DID as revoked, future envelopes rejected

# Query trust registry
xlink lookup --did "did:key:z6Mk..."
→ Returns registration status, scopes, metadata
Docker Deployment
# Pull and run the Xlink identity server
docker compose up -d xlink

# Verify health
curl http://localhost:3300/health
# {"status":"ok","version":"0.1.0","uptime":42}

# Air-gapped deployment
docker save private.me/xlink-cli > xlink-cli.tar
# Transfer to air-gapped environment
docker load < xlink-cli.tar
docker compose up -d
Migration from API Keys
Zero-downtime migration. Gradual traffic shifting. Version negotiation. DualModeAdapter for API key + Xlink hybrid deployments.
Migrating from API key-based authentication to Xlink cryptographic identity is a gradual process. The SDK provides a DualModeAdapter that handles both legacy API key requests and modern Xlink envelope requests simultaneously. Traffic shifts progressively from keys to identity over weeks or months, with zero service interruption.
DualModeAdapter Overview
The DualModeAdapter sits at your service boundary and inspects incoming requests. If the request contains a traditional API key (Authorization header, query parameter, or custom header), it routes to your existing authentication logic. If the request contains a Xlink envelope (Content-Type: application/xlink-envelope), it verifies the signature, checks the nonce, and extracts the payload.
import { DualModeAdapter, Agent } from '@private.me/xlink';
import express from 'express';

// Create Xlink agent for your service
const agent = (await Agent.create({ name: 'ProductionAPI', registry, transport })).value!;

// Initialize DualModeAdapter with legacy key validator
const adapter = new DualModeAdapter({
  agent,
  legacyKeyValidator: async (apiKey) => {
    // Your existing API key validation logic
    const user = await db.users.findOne({ apiKey });
    return user ? { userId: user.id, scopes: user.scopes } : null;
  },
  mode: 'hybrid', // Accept both keys and Xlink envelopes
});

// Express middleware integration
const app = express();
app.use(adapter.middleware());

app.post('/api/data', async (req, res) => {
  // req.auth contains either legacy user data OR Xlink sender DID
  if (req.auth.mode === 'xlink') {
    console.log(`Request from Xlink DID: ${req.auth.did}`);
  } else {
    console.log(`Legacy API key user: ${req.auth.userId}`);
  }
  res.json({ status: 'ok' });
});
Track migration progress by logging req.auth.mode metrics.
Traffic Shifting Strategy
Migration proceeds in three phases: hybrid mode (both keys and Xlink accepted), deprecation warnings (keys still work but clients receive upgrade prompts), and enforcement (keys rejected, Xlink required). The DualModeAdapter mode parameter controls this progression.
| Phase | Mode Setting | API Key Requests | Xlink Requests | Duration |
|---|---|---|---|---|
| 1. Hybrid | hybrid | Accepted | Accepted | 4-12 weeks |
| 2. Deprecation | deprecation | Accepted + warning header | Accepted | 4-8 weeks |
| 3. Enforcement | xlink-only | Rejected (401) | Accepted | Permanent |
// Week 0-12: Hybrid mode (both accepted)
adapter.setMode('hybrid');

// Week 12-20: Deprecation warnings
adapter.setMode('deprecation');
// API key requests receive X-Deprecated-Auth: "true" header
// Response includes upgrade instructions in X-Migration-Url

// Week 20+: Xlink enforcement
adapter.setMode('xlink-only');
// API key requests return 401 with migration guide URL
Monitor Xlink adoption via adapter.getMetrics(). Once 95%+ of traffic uses Xlink (typically 8-12 weeks in hybrid mode), activate deprecation warnings. After another 4-8 weeks, enforce Xlink-only mode. Adjust timelines based on your client base and communication channels.
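The adoption-gated progression above can be sketched as a small helper. This is illustrative only: real rollouts also gate on elapsed weeks and client feedback, while this sketch checks only the 95% adoption threshold.

```typescript
// Sketch of metrics-driven phase transitions under the 95%-adoption
// threshold described above. Time-based gating is omitted.
type Mode = 'hybrid' | 'deprecation' | 'xlink-only';

function nextMode(current: Mode, xlinkPercentage: number): Mode {
  if (xlinkPercentage < 95) return current; // hold until adoption target met
  if (current === 'hybrid') return 'deprecation';
  if (current === 'deprecation') return 'xlink-only';
  return current; // already enforcing
}

console.log(nextMode('hybrid', 68.9)); // → "hybrid"
console.log(nextMode('hybrid', 96.2)); // → "deprecation"
```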
Version Negotiation
Xlink envelope versions (v1, v2, v3, Xchange) auto-negotiate based on sender and receiver capabilities. Agents inspect the recipient's registered keys in the trust registry and select the highest mutually supported version. No manual configuration required.
| Sender Capabilities | Receiver Capabilities | Negotiated Version | Security Features |
|---|---|---|---|
| v3 (Ed25519 + X25519 + ML-KEM + ML-DSA) | v3 (Ed25519 + X25519 + ML-KEM + ML-DSA) | v3 | Hybrid PQ KEM + dual signatures |
| v3 | v2 (Ed25519 + X25519 + ML-KEM, no ML-DSA) | v2 | Hybrid PQ KEM + Ed25519 sig only |
| v3 | v1 (Ed25519 + X25519, no PQ) | v1 | Classical crypto only |
| v3 + Xchange opt-in | v3 + Xchange opt-in | Xchange | IT-secure split, no KEM (faster) |
// Agent A: v3-capable (Ed25519 + X25519 + ML-KEM + ML-DSA)
const agentA = (await Agent.create({
  name: 'ServiceA',
  registry,
  transport,
  postQuantumSig: true, // Enables ML-DSA-65
})).value!;

// Agent B: v2-capable (no ML-DSA support)
const agentB = (await Agent.create({
  name: 'ServiceB',
  registry,
  transport,
  postQuantumSig: false, // Only Ed25519 signatures
})).value!;

// When A sends to B, SDK auto-negotiates to v2
await agentA.send({ to: agentB.did, payload: data });
// Uses v2 envelope (ML-KEM-768 + Ed25519, no ML-DSA)

// When A sends to another v3 agent, SDK uses v3
await agentA.send({ to: v3RecipientDid, payload: data });
// Uses v3 envelope (ML-KEM-768 + Ed25519 + ML-DSA-65)
Zero-Downtime Cutover Procedure
The recommended deployment pattern uses a phased rollout with canary testing and incremental traffic shifting. This procedure assumes a load-balanced multi-instance deployment behind a reverse proxy or API gateway.
Step 1: Canary Deployment (Week 0)
Deploy DualModeAdapter to a single instance (canary). Route 5% of traffic to it via load balancer weights. Monitor error rates, latency, and auth success metrics. If stable for 48 hours, proceed to Step 2.
# Load balancer config (example: nginx upstream weights)
upstream api_backend {
  server instance1.internal:3000 weight=19;  # Legacy (95%)
  server instance2.internal:3000 weight=1;   # Canary DualMode (5%)
}
Step 2: Gradual Rollout (Week 1-4)
Deploy DualModeAdapter to all instances. Shift traffic incrementally: 10% → 25% → 50% → 75% → 100% over 4 weeks. Monitor Xlink adoption rate. Target: 30-50% of clients using Xlink envelopes by end of Week 4.
# Week 1: 10% DualMode
weight=18 (legacy), weight=2 (DualMode)

# Week 2: 25% DualMode
weight=15 (legacy), weight=5 (DualMode)

# Week 3: 50% DualMode
weight=10 (legacy), weight=10 (DualMode)

# Week 4: 100% DualMode
all instances running DualModeAdapter in hybrid mode
Step 3: Client Migration (Week 5-12)
Publish Xlink integration guides and SDKs for your client ecosystem. Provide code examples, upgrade paths, and support channels. Track Xlink adoption via adapter.getMetrics(). Target: 80%+ adoption by end of Week 12.
Step 4: Deprecation Warnings (Week 13-20)
Switch DualModeAdapter to deprecation mode. API key requests still succeed but receive X-Deprecated-Auth: true header and migration guide URL. Monitor support tickets and client feedback. Extend timeline if needed.
Step 5: Enforcement (Week 21+)
Once 95%+ of traffic uses Xlink (typically Week 18-20), switch to xlink-only mode. API key requests return 401 Unauthorized with migration instructions. Maintain a support hotline for stragglers. After 4 weeks in enforcement mode, remove legacy API key validation code entirely.
Rollback at any step means reverting to hybrid mode or redeploying legacy instances. Keep legacy authentication logic intact until 4 weeks after enforcement begins. Have a tested rollback plan and practice it in staging.
Monitoring & Metrics
DualModeAdapter exposes real-time metrics for tracking migration progress and identifying adoption blockers.
const metrics = adapter.getMetrics();
console.log(metrics);
// {
//   totalRequests: 142850,
//   xlinkRequests: 98420,
//   apiKeyRequests: 44430,
//   xlinkPercentage: 68.9,
//   failedAuth: 47,
//   avgLatencyMs: { xlink: 12.4, apiKey: 8.7 }
// }

// Export to monitoring system
setInterval(() => {
  const m = adapter.getMetrics();
  prometheus.gauge('xlink_adoption_pct').set(m.xlinkPercentage);
  prometheus.counter('xlink_requests_total').inc(m.xlinkRequests);
}, 60000);
Track the xlinkPercentage metric daily. A healthy migration shows steady week-over-week growth: 10% → 25% → 45% → 65% → 85% → 95%. If growth stalls, investigate client-side integration blockers (SDK confusion, documentation gaps, support tickets).
Implementation Details
Deep-dive into error handling, trust registry, nonce store, transport adapters, and the complete ACI surface.
Error Hierarchy
Typed error classes for structured error handling. Supplementary to the Result<T,E> string code pattern.
XlinkError                  // Base class (code, subCode, docUrl)
├── XlinkIdentityError      // Ed25519/X25519 keygen, sign, verify, DID
├── XlinkEnvelopeError      // Envelope create, encrypt, decrypt, parse
├── XlinkTransportError     // Send failures, network, timeouts
├── XlinkRegistryError      // Lookup, registration, revocation
├── XlinkKeyAgreementError  // X25519 ECDH derivation
├── XlinkSplitChannelError  // XorIDA split, reconstruct, HMAC
└── XlinkAgentError         // High-level Agent lifecycle
import { toXlinkError, isXlinkError } from '@private.me/xlink';

const result = await agent.receive(envelope);
if (!result.ok) {
  const err = toXlinkError(result.error);
  console.log(err.code);    // 'DECRYPT_FAILED'
  console.log(err.subCode); // 'KEY_AGREEMENT'
  console.log(err.docUrl);  // 'https://xail.io/docs/packages/xlink#envelope'
  console.log(err.name);    // 'XlinkEnvelopeError'
}
Trust Registry
Four implementations covering development through production.
Production Persistence with FileTrustRegistry
For production single-node deployments, FileTrustRegistry provides persistent JSONL-based storage with automatic crash recovery. All operations append to an immutable log, replayed into memory on initialization.
import { Agent, FileTrustRegistry, HttpsTransportAdapter } from '@private.me/xlink';

// JSONL file persists across restarts
const registry = new FileTrustRegistry({ path: '/opt/app/trust.jsonl' });
const transport = new HttpsTransportAdapter({ baseUrl: 'https://api.example.com' });

const agent = (await Agent.create({ name: 'ProductionService', registry, transport })).value!;

// Registry automatically persists all add/update/remove operations
// Survives process restarts, crashes, and power failures
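The append-only JSONL pattern itself is worth a sketch: every mutation appends one JSON line, and startup replays the log into memory. The entry shape below is illustrative, not FileTrustRegistry's actual on-disk format.

```typescript
import { appendFileSync, readFileSync, existsSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// Sketch of append-only JSONL persistence with replay-on-startup.
// The LogEntry shape is illustrative only.
type LogEntry = { op: 'add' | 'remove'; did: string; name?: string };

function appendEntry(path: string, entry: LogEntry): void {
  appendFileSync(path, JSON.stringify(entry) + '\n'); // crash-safe append
}

function replay(path: string): Map<string, LogEntry> {
  const state = new Map<string, LogEntry>();
  if (!existsSync(path)) return state;
  for (const line of readFileSync(path, 'utf8').split('\n')) {
    if (!line.trim()) continue; // tolerate a torn final line
    const e = JSON.parse(line) as LogEntry;
    if (e.op === 'add') state.set(e.did, e);
    else state.delete(e.did);
  }
  return state;
}

const log = join(tmpdir(), `trust-demo-${process.pid}.jsonl`);
appendEntry(log, { op: 'add', did: 'did:key:zAlice', name: 'Alice' });
appendEntry(log, { op: 'add', did: 'did:key:zBob', name: 'Bob' });
appendEntry(log, { op: 'remove', did: 'did:key:zBob' });
console.log(replay(log).size); // → 1
```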
Enterprise Multi-Backend Factory
For enterprise deployments requiring centralized trust management with local fallback, createEnterpriseTrustRegistry() provides a factory function that combines HTTP primary + File fallback in a single interface.
import { createEnterpriseTrustRegistry } from '@private.me/xlink';

// Primary: centralized HTTP registry
// Fallback: local JSONL file when HTTP unreachable
const registry = createEnterpriseTrustRegistry({
  http: { baseUrl: 'https://trust.corp.example.com', authToken: process.env.TRUST_TOKEN },
  file: { path: '/var/lib/trust-fallback.jsonl' }
});

// Reads try HTTP first, falling back to file on network failure
// Writes go to both backends for redundancy
Nonce Store & Anti-Replay Protection
Replay prevention via unique nonce tracking. Every envelope carries a cryptographically random 16-byte nonce generated via crypto.getRandomValues().
Nonce-Based Replay Attack Prevention
A nonce (number used once) is a cryptographic value that ties each envelope to a single use. When an envelope arrives, the receiver checks whether its nonce has been seen before. If the nonce exists in the store, the envelope is rejected with REPLAY_DETECTED — preventing an attacker from capturing a valid envelope and re-sending it to execute the same action twice. This pattern is based on established cryptographic nonce practices used in OAuth 2.0, OIDC, and blockchain protocols.
| Step | Actor | Action | Defense |
|---|---|---|---|
| 1. Generate | Sender | 16-byte nonce via crypto.getRandomValues() | High-entropy random |
| 2. Include | Sender | Nonce embedded in envelope (signed) | Integrity-protected |
| 3. Check | Receiver | NonceStore.check(nonce, senderDid) | Atomic set-if-not-exists |
| 4. Store | Receiver | Nonce stored with TTL expiry | Time-bound memory usage |
| 5. Replay | Attacker | Re-sends captured envelope | Rejected (duplicate nonce) |
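The check-and-store flow in the table can be sketched as a minimal in-memory store. This is not the SDK's MemoryNonceStore; it only demonstrates the first-use-accepted, repeat-rejected semantics with a TTL.

```typescript
// Minimal sketch of nonce-based replay rejection: the first sighting of
// a (sender, nonce) pair is recorded with a TTL; any repeat within the
// TTL is a replay. Not the SDK's MemoryNonceStore.
class SimpleNonceStore {
  private seen = new Map<string, number>(); // key -> expiry (ms epoch)
  constructor(private readonly ttlMs: number) {}

  /** True if the nonce is fresh (and now recorded); false on replay. */
  check(nonce: string, senderDid: string): boolean {
    const key = `${senderDid}:${nonce}`;
    const now = Date.now();
    const expiry = this.seen.get(key);
    if (expiry !== undefined && expiry > now) return false; // replay detected
    this.seen.set(key, now + this.ttlMs); // record (or refresh an expired slot)
    return true;
  }
}

const store = new SimpleNonceStore(600_000); // 10-minute TTL
console.log(store.check('abc123', 'did:key:zSender')); // → true  (first use)
console.log(store.check('abc123', 'did:key:zSender')); // → false (replay)
```

Note this single-threaded sketch sidesteps the concurrency concern that motivates Redis's atomic set-if-not-exists in distributed deployments.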
Implementation Options
import { RedisNonceStore } from '@private.me/xlink';
import Redis from 'ioredis';

const redis = new Redis({ host: 'redis.example.com' });
const nonceStore = new RedisNonceStore({
  client: redis,
  ttlSeconds: 600,     // 10 minutes (default)
  keyPrefix: 'nonce:', // Redis key namespace
});

const agent = await Agent.create({
  name: 'DistributedAgent',
  registry,
  transport,
  nonceStore, // Cross-node protection
});
Production deployments should use RedisNonceStore or an equivalent distributed store. Memory-based stores are cleared on process restart, so a captured envelope could be replayed after a restart while still inside the timestamp window. Redis provides atomic SET NX EX semantics, ensuring that even under high concurrency a nonce is either accepted once or rejected as a duplicate — no race conditions.
Transport Adapters
Pluggable delivery mechanism. Two built-in adapters plus a custom interface.
interface XailTransportAdapter {
  send(envelope: TransportEnvelope, to: string): Promise<Result<void, TransportError>>;
  onReceive(handler: (envelope: TransportEnvelope) => void): void;
  dispose(): void;
}
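A custom adapter only needs to satisfy that interface. A minimal sketch follows, delivering through an in-memory queue; `Result`, `TransportEnvelope`, and `TransportError` are local stand-ins for the SDK's types, defined here so the example is self-contained.

```typescript
// Sketch of a custom transport adapter with in-memory delivery.
// The three type aliases are minimal stand-ins for the SDK's types.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };
type TransportEnvelope = { id: string; payload: unknown };
type TransportError = { code: string; message: string };

class InMemoryTransportAdapter {
  private handlers: Array<(e: TransportEnvelope) => void> = [];

  async send(envelope: TransportEnvelope, to: string): Promise<Result<void, TransportError>> {
    if (this.handlers.length === 0) {
      return { ok: false, error: { code: 'NO_RECEIVER', message: `no handler for ${to}` } };
    }
    for (const h of this.handlers) h(envelope); // deliver to every subscriber
    return { ok: true, value: undefined };
  }

  onReceive(handler: (e: TransportEnvelope) => void): void {
    this.handlers.push(handler);
  }

  dispose(): void {
    this.handlers = []; // drop all subscriptions
  }
}

const adapter = new InMemoryTransportAdapter();
const received: string[] = [];
adapter.onReceive((e) => received.push(e.id));
await adapter.send({ id: 'env-1', payload: { hello: true } }, 'did:key:zPeer');
console.log(received); // → ["env-1"]
```

A production adapter would replace the in-memory loop with a queue, webhook, or message broker, but the send/onReceive/dispose contract stays the same.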
Full ACI Surface
Complete Authenticated Cryptographic Interface organized by module.
Agent
Generate identity, register with registry, wire transport. Primary factory.
Restore from persisted PKCS8 identity. Skips keygen + registration.
Synchronous construction from pre-built components. No async, no Result.
Deterministic identity from 32-byte seed via HKDF-SHA256. IoT factory provisioning.
Encrypt, sign, and deliver. Auto ECDH when available. Automatic split-channel for high-risk operations.
Verify version, timestamp, nonce, sender, signature, scope. Decrypt. Parse.
Create signed envelope without sending. Used for ACI purchasing and deferred delivery scenarios. Envelope can be serialized and transmitted via external transport.
ACI Purchase Endpoint
Purchase ACI using xLink envelope or email. Accepts signed envelopes for identity-based purchasing (60 req/min rate limit with X-Client-Type: ai-agent header). Email-based fallback available (10 req/min). Returns subscriptionId, connectionId, and xlinkAuthenticated flag.
Identity
Ed25519 + X25519 keypair generation via Web Crypto API. ML-DSA-65 keys when postQuantumSig: true.
PKCS8 DER export/import for identity persistence.
HKDF-SHA256 deterministic derivation from 32-byte seed.
Envelope
Encrypted or signed-only envelope creation.
Validation and deserialization from unknown input.
Split-Channel
XorIDA split with HMAC integrity. Default: 3 shares, threshold 2.
Reconstruct from K+ shares. HMAC verified before returning plaintext.
Key Agreement
Hybrid X25519 + ML-KEM-768 key agreement with ephemeral key pair per message.
Error Taxonomy
Complete error code table with sub-codes and error classes.
| Code | Class | When |
|---|---|---|
| IDENTITY_FAILED | Agent | Identity generation fails during create() |
| REGISTRATION_FAILED | Agent | Registry rejects registration |
| REGISTRATION_FAILED:ALREADY_REGISTERED | Agent | DID already exists in registry |
| VERIFICATION_FAILED:SIGNATURE_MISMATCH | Agent | Ed25519 signature does not match |
| VERIFICATION_FAILED:DID_NOT_IN_REGISTRY | Agent | Sender DID not found in registry |
| REPLAY_DETECTED | Agent | Duplicate nonce (replay attack) |
| TIMESTAMP_EXPIRED | Agent | Envelope outside timestamp window |
| SCOPE_DENIED | Agent | Sender lacks required scope |
| DECRYPT_FAILED:KEY_AGREEMENT | Envelope | ECDH derivation fails |
| DECRYPT_FAILED:DECRYPTION | Envelope | AES-GCM decryption fails |
| DECRYPT_FAILED:PARSE | Envelope | Decrypted bytes not valid JSON |
| SEND_FAILED:BELOW_THRESHOLD | Transport | Split: fewer than K shares delivered |
| HMAC_VERIFICATION_FAILED | SplitChannel | Share HMAC check fails |
| INSUFFICIENT_SHARES | SplitChannel | Fewer than threshold shares |
| INCONSISTENT_SHARES | SplitChannel | Mismatched groupId or params |
| INVALID_KEY_LENGTH:EXPECTED_32 | KeyAgreement | X25519 key not 32 bytes |
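The CODE[:SUB_CODE] convention in the table lends itself to simple structured dispatch. The parsing below follows that convention directly; the retryability classification is an illustrative assumption, not part of the SDK.

```typescript
// Sketch: split the CODE[:SUB_CODE] strings from the table above into
// structured parts. The isRetryable mapping is an assumption for
// illustration only.
function parseErrorCode(raw: string): { code: string; subCode?: string } {
  const [code, subCode] = raw.split(':');
  return subCode ? { code, subCode } : { code };
}

function isRetryable(raw: string): boolean {
  // Transport delivery may succeed on retry; verification, replay, and
  // scope failures are permanent for that envelope (assumption).
  return parseErrorCode(raw).code === 'SEND_FAILED';
}

console.log(parseErrorCode('DECRYPT_FAILED:KEY_AGREEMENT'));
// → { code: 'DECRYPT_FAILED', subCode: 'KEY_AGREEMENT' }
console.log(isRetryable('SEND_FAILED:BELOW_THRESHOLD')); // → true
console.log(isRetryable('REPLAY_DETECTED'));             // → false
```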
Codebase Stats
Xlink v0.1.0 — Gold Standard Bronze tier achieved.
Module Inventory
| Module | Source File | Purpose |
|---|---|---|
| Identity | identity.ts | Ed25519 + X25519 keygen, DID, PKCS8, sign/verify |
| Envelope | envelope.ts | v1 create/decrypt/serialize/validate |
| Agent | agent.ts | Top-level ACI (create, send, receive, middleware) |
| Split-Channel | split-channel.ts | XorIDA bridge: split, reconstruct, HMAC |
| Key Agreement | key-agreement.ts | X25519 ECDH ephemeral key agreement |
| Nonce Store | nonce-store.ts | MemoryNonceStore for replay prevention |
| Redis Nonce | redis-nonce-store.ts | RedisNonceStore for distributed deployments |
| Trust Registry | trust-registry.ts | Memory + HTTP registry |
| DID:web | did-web.ts | W3C did:web resolver |
| Transport | transport.ts | Interface + HTTPS adapter |
| Gateway | gateway-transport.ts | Xail inbox gateway delivery |
| Middleware | registry-middleware.ts | Express auth middleware for registry |
| Errors | errors.ts | Error class hierarchy (XlinkError + 7) |
| Verify | verify.ts | Lightweight verify-only sub-path |
Deployment Options
SaaS Recommended
Fully managed infrastructure. Call our REST API, we handle scaling, updates, and operations.
- Zero infrastructure setup
- Automatic updates
- 99.9% uptime SLA
- 3-month free trial
SDK Integration
Embed directly in your application. Runs in your codebase with full programmatic control.
npm install @private.me/xlink
- TypeScript/JavaScript SDK
- Full source access
- Enterprise support available
On-Premise Enterprise
Self-hosted infrastructure for air-gapped, compliance, or data residency requirements.
- Complete data sovereignty
- Air-gap capable
- Docker + Kubernetes ready
- RBAC + audit logs included
Structured Returns
All xLink operations return structured objects with multiple format representations, replacing prose strings. This dramatically improves token efficiency when using LLM-assisted development tools and AI coding assistants.
Security Mode Descriptions
The describeSecurityModeStructured() function returns comprehensive
security mode information in multiple formats:
import { describeSecurityModeStructured } from '@private.me/xlink';
const mode = {
type: 'split',
shares: { total: 3, threshold: 2 },
level: 'high'
};
const description = describeSecurityModeStructured(mode);
// Choose format based on context:
console.log(description.formats.singleline);
// → "high | split | 2-of-3"
console.log(description.formats.multiline);
// → Multi-line human-readable display
console.log(description.formats.json);
// → {"type":"split","level":"high","shares":{"total":3,"threshold":2}}
console.log(description.formats.markdown);
// → **Security Level:** High
// **Mode:** Split (threshold sharing)
// **Configuration:** 2-of-3
Return Type
interface SecurityModeDescription {
readonly type: 'standard' | 'split' | 'xchange';
readonly level: 'standard' | 'high' | 'critical' | 'performance';
readonly shares?: {
readonly total: number;
readonly threshold: number;
};
readonly formats: FormattedOutput;
}
interface FormattedOutput {
readonly multiline: string; // Human-readable display
readonly singleline: string; // Compact logs (10x token savings)
readonly json: string; // Machine-readable APIs
readonly markdown: string; // Documentation/reports
}
Error Formatting
All xLink errors use formatErrorStructured() from
@private.me/ux-helpers:
import { formatErrorStructured } from '@private.me/ux-helpers';
const result = await agent.send({ to, payload });
if (!result.ok) {
const formatted = formatErrorStructured(result.error);
// Log format (compact)
logger.error(formatted.formats.singleline);
// UI display (human-readable)
displayError(formatted.formats.multiline);
// API response (machine-readable)
res.json(JSON.parse(formatted.formats.json));
}
Benefits
- 10x token savings - LLMs parse structured data vs prose (200 bytes vs 2KB)
- Programmatic access - Extract specific fields without string parsing
- Consistent formatting - Same structure across all operations
- Context-aware - Choose format based on output destination
- Type-safe - TypeScript validates usage at compile time
Migration from Prose
The original describeSecurityMode() is deprecated but maintained
for backward compatibility:
// Old (deprecated)
const description: string = describeSecurityMode(mode);
// → "High security: 2-of-3 threshold sharing with 3 total shares"
// New (recommended)
const description = describeSecurityModeStructured(mode);
const prose = description.formats.multiline; // Same content, structured access
When to Use
- Logs: Use .formats.singleline for compact, parseable logs
- UI Display: Use .formats.multiline for human-readable messages
- APIs: Use .formats.json for machine-to-machine communication
- Documentation: Use .formats.markdown for reports/exports
- Debugging: Access raw fields (type, level, shares)
Deployment Model
xLink is an SDK package that you integrate into your applications. Add it to your project via npm/pnpm, then deploy on your own infrastructure. No hosted services, no vendor lock-in—you control where your code runs.
Use in Your Applications
Install via package manager, integrate into your codebase, and deploy wherever you run your services—AWS Lambda, Google Cloud Functions, on-premise servers, Kubernetes clusters, or edge workers. The subscription covers your right to use the SDK in production; you manage the infrastructure.
Pricing
All private.me platform services follow a simple tier-based subscription model. Free trial: 3 months with full access to all features — no credit card required.
- Unlimited connections
- Core authentication features
- Standard trust registries
- Community support
- SDK access
- Everything in Basic, plus:
- Priority support
- Advanced analytics
- SLA: 99.9% uptime
- Integration assistance
- Everything in Middle, plus:
- Enterprise CLI & governance
- Audit logs & compliance
- On-premise deployment
- Dedicated support
- Custom SLAs