PRIVATE.ME · Technical White Paper

xStore: Universal Split-Storage

One interface for multi-cloud storage with information-theoretic security. xStore is the universal split-storage layer — XorIDA-split every write, HMAC-verify every read, route shares to pluggable backends. No single backend ever holds enough to reconstruct.

v0.1.0 · 65 tests passing · 6 modules · 4 test files · ~1ms split + route · k-of-n configurable

Fast Onboarding: 3 Acceleration Levels

From zero-click code patterns to one-click deploy buttons, xStore offers three acceleration levels that reduce setup time from ~2 minutes to as low as ~15 seconds. Each level targets a different deployment context.

Level 1: Zero-Click Accept (Lazy Store Initialization)

For trusted environments, the lazy store pattern defers backend configuration until the first storage operation. No explicit init() call required — the store automatically configures its backends on first use and auto-accepts backend connections from known partners.

Zero-click lazy initialization
import { XStore } from '@private.me/xstore';

// Store configures backends on first operation
const store = await XStore.lazy({
  name: 'my-storage'
});

// First put() triggers backend initialization automatically
const data = new TextEncoder().encode('Sensitive data');
await store.put('my-key', data);

// Retrieve data (reconstruction happens transparently)
const result = await store.get('my-key');

Setup time: ~80 seconds (backend initialization happens transparently during first operation)
Best for: Internal services, trusted partner integrations, development/testing

Level 2: One-Line CLI Setup (Invite-Based)

For production deployments, the one-line CLI command accepts an invite code and automatically configures the entire backend topology, credentials, and threshold settings in a single operation. No manual backend configuration, no credential juggling.

One-line invite acceptance
# Partner sends you invite code: XST-abc123def456
$ npx @private.me/xstore-cli init --invite XST-abc123def456

{
  "status": "initialized",
  "name": "my-storage",
  "backends": [
    {
      "id": "s3-us-east",
      "type": "s3",
      "region": "us-east-1"
    },
    {
      "id": "azure-eu-west",
      "type": "azure-blob",
      "region": "westeurope"
    },
    {
      "id": "local-fs",
      "type": "filesystem",
      "path": "/var/xstore/shares"
    }
  ],
  "threshold": 2,
  "totalShares": 3,
  "elapsed_seconds": 2.3
}

Setup time: ~40 seconds (single command, no manual configuration)
Best for: Production deployments, multi-cloud setups, compliance-driven topologies

Level 3: Deploy Button (One-Click Infrastructure)

For platform integrations, deploy buttons provide one-click infrastructure provisioning with xStore pre-configured. Clicking the button deploys a complete service with backend topology, credentials, and threshold configuration already set.

Deploy button (Vercel example)
<!-- Add to your README.md or integration docs -->
[![Deploy to Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fprivate-me%2Fxstore-starter&env=XSTORE_INVITE_CODE&envDescription=Paste%20your%20xStore%20invite%20code%20here&envLink=https%3A%2F%2Fprivate.me%2Fdocs%2Fxstore)

// Deployed service includes:
// - Auto-configured backend topology (S3, Azure, IPFS)
// - Pre-configured threshold settings from invite
// - API endpoints for put/get operations
// - Health check and monitoring

Setup time: ~15 seconds (one click, zero configuration)
Best for: SaaS integrations, marketplace apps, rapid prototyping

Setup Time Comparison

Method | Setup Time | Steps Required | Configuration | Best For
Manual (Traditional) | ~2 minutes | 3 (config + backends + code) | Manual backend setup | Full control, learning
Level 1: Zero-Click | ~80 seconds | 0 (lazy init) | Auto-accept from env | Trusted integrations
Level 2: One-Line CLI | ~40 seconds | 1 (accept invite) | Invite code only | Production deployments
Level 3: Deploy Button | ~15 seconds | 1 (click button) | Zero configuration | SaaS integrations
ACCELERATION MULTIPLIER
Deploy buttons reduce setup time by 8× compared to manual initialization (15 seconds vs 2 minutes). For multi-cloud deployments requiring S3 + Azure + IPFS configuration (traditionally 30-45 minutes), this creates a total acceleration of 120-180× for platform integrations.

Getting Started: Fastest Path

The recommended onboarding path depends on your deployment context:

  1. New to xStore? Start with a Deploy Button to see xStore running in production in ~15 seconds. Deploy the starter template, then explore the code.
  2. Production deployment? Use the One-Line CLI with an invite code. One command configures your entire backend topology in ~40 seconds.
  3. Internal services? Use Zero-Click Accept with lazy initialization. Zero setup steps; backend configuration happens transparently on first use.
  4. Want full control? Follow the standard 2-minute setup flow with explicit backend configuration.
Fastest path: Deploy button first, explore code second
// 1. Click "Deploy to Vercel" button (~15 sec)
// 2. Paste invite code when prompted
// 3. Service deploys with xStore pre-configured
// 4. Clone the deployed repo to see the code:

$ git clone https://github.com/your-username/deployed-xstore-service
$ cd deployed-xstore-service
$ cat src/index.ts

// You'll see the lazy store pattern in action:
const store = await XStore.lazy({ name: 'deployed-storage' });
app.post('/store', async (req, res) => {
  const result = await store.put(req.body.key, req.body.data);
  res.json(result);
});
Section 01

Executive Summary

Every ACI reinvents storage. xStore eliminates the problem by providing a universal split-storage layer that abstracts XorIDA share persistence behind a pluggable backend interface.

Two functions cover 80% of use cases: store.put() splits data via XorIDA into k-of-n shares, computes HMAC-SHA256 per share, routes each to a designated backend (AWS S3, Azure Blob, filesystem, or custom), and returns a manifest with backend routing + integrity tags. store.get() fetches k shares, verifies HMAC on each before reconstruction, XorIDA-reconstructs the data, and returns the original byte-identical payload.

No keys. No rotation. No cross-backend consistency logic. xStore handles the storage orchestration; your application handles the business logic. Move from S3 to GCP by swapping a backend implementation — zero application code changes. Store Share 1 in the EU and Share 2 in the US for compliance. Keep Share 3 on an air-gapped filesystem for disaster recovery.

One universal abstraction. Backends are pluggable. Manifests are portable. Integrity is cryptographic. Threshold is configurable (k-of-n). Zero npm dependencies at the core.

Section 02

Developer Experience

xStore provides progress callbacks for multi-backend visibility, structured error codes with recovery guidance, and a minimal API surface. Track uploads across regions. Handle failures gracefully. Debug storage orchestration with confidence.

Progress Callbacks

Every put() and get() operation accepts optional progress callbacks that emit per-backend status updates. Real-time visibility into which shares are being written to which backends.

Track multi-cloud upload progress
const result = await store.put('large-dataset', data, {
  onProgress: (event) => {
    console.log(`Backend ${event.backendId}: share ${event.shareIndex}/${event.totalShares}`);
    console.log(`Status: ${event.status} | ${event.bytesTransferred}/${event.totalBytes} bytes`);
  }
});

Error Codes

15+ error codes across 5 categories, including Backend Management, Share Operations, and Data Operations. Every error includes a code, a message, and recovery guidance.

Category | Error Code | Recovery
Backend | BACKEND_UNAVAILABLE | Retry with backoff. Fail over to other backends.
Backend | BACKEND_QUOTA_EXCEEDED | Add capacity or configure TTL expiry.
Share Ops | SHARE_INTEGRITY_FAILED | HMAC verification failed. Share was tampered with.
Share Ops | INSUFFICIENT_SHARES | Fewer than k shares available. Check backend health.
Data Ops | MANIFEST_INVALID | Manifest corrupted or tampered. Re-upload data.
Data Ops | PAYLOAD_HMAC_FAILED | Data tampered with. Wrong HMAC key or share corruption.
Section 03

The Problem

Every ACI that splits data with XorIDA writes its own storage logic. There is no standard interface, no backend portability, and no cross-ACI reuse.

When data is split into k-of-n shares, those shares must be stored across independent backends. Today, each application hardcodes where shares go — one app stores on S3, another on local disk, another in a database. There is no common abstraction, no manifest format, no integrity verification at the storage layer.

This creates three problems: Backend lock-in (moving from AWS to Azure means rewriting storage code), No portability (shares from one ACI cannot be consumed by another without bespoke integration), and No integrity guarantees (corrupted shares are discovered only at reconstruction time, when it is too late).

The xStore Solution
One pluggable backend interface. Multi-backend routing. Manifest-based metadata. HMAC integrity on every share. Write once, store anywhere. Move to a new backend by changing configuration, not code.
Section 04

Use Cases

INFRASTRUCTURE
Multi-Cloud

Store shares across AWS, Azure, and GCP. No single cloud provider sees your data. Compromised cloud account reveals only cryptographic noise.

Cloud Agnostic
COMPLIANCE
Multi-Jurisdiction

EU data stays in EU backends. US data stays in US backends. Satisfy GDPR, CCPA, and data residency with one storage layer.

Data Residency
SECURITY
Air-Gapped Backup

Offline filesystem backends for classified data. Shares never touch a network. Combine with online backends for hybrid resilience.

Air-Gapped
DATA ASSETS
Anti-Piracy Protection

Split proprietary datasets across independent backends to prevent database exfiltration. No single backend breach yields usable data.

IP Protection
Section 05

Architecture

Five-stage pipeline: pad, authenticate, split, route, store. Every stage independently verifiable. Manifest ties it all together.

XSTORE PIPELINE: DATA → PKCS7 PAD → HMAC-SHA256 → XorIDA SPLIT → BACKEND 1 (e.g. AWS S3) + BACKEND 2 (e.g. Azure Blob) + BACKEND N (e.g. Local FS) → MANIFEST (threshold, share HMACs, backend refs, created)
~14µs: 64B split time
k-of-n: configurable threshold
0 bits: per-share leakage
2x HMAC: share + payload integrity
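The zero-bit per-share leakage can be illustrated with a toy 2-of-2 XOR split. This is a sketch of the principle, not the actual XorIDA implementation:

```typescript
import { randomBytes } from 'crypto';

// Toy 2-of-2 XOR split: shareA is uniform random noise and
// shareB = data XOR shareA. Either share alone is statistically
// independent of the data, which is the information-theoretic
// guarantee in miniature. NOT the real XorIDA implementation.
function xorSplit(data: Uint8Array): [Uint8Array, Uint8Array] {
  const shareA = new Uint8Array(randomBytes(data.length));
  const shareB = data.map((byte, i) => byte ^ shareA[i]);
  return [shareA, shareB];
}

// Reconstruction XORs the two shares back together.
function xorReconstruct(a: Uint8Array, b: Uint8Array): Uint8Array {
  return a.map((byte, i) => byte ^ b[i]);
}
```

Because shareA is drawn uniformly at random, shareB is also uniformly distributed regardless of the data, so a single compromised backend holds only noise.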

Pluggable Backend Interface

The StorageBackend interface defines three methods: put, get, and delete. Any storage system implementing this interface can serve as a share backend. Zero application-layer changes to swap backends.

StorageBackend interface
interface StorageBackend {
  readonly id: string;
  put(key: string, data: Uint8Array): Promise<Result<void>>;
  get(key: string): Promise<Result<Uint8Array>>;
  delete(key: string): Promise<Result<void>>;
}
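A minimal backend illustrating the interface above might look like this in-memory sketch. The Result shape ({ ok, value } / { ok, error }) is an assumption, since the white paper does not pin down the type:

```typescript
// Assumed Result shape -- the actual xStore type may differ.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

interface StorageBackend {
  readonly id: string;
  put(key: string, data: Uint8Array): Promise<Result<void>>;
  get(key: string): Promise<Result<Uint8Array>>;
  delete(key: string): Promise<Result<void>>;
}

// Minimal in-memory backend: anything satisfying the three
// methods can hold a share. Useful as a template for real adapters.
class MemoryBackend implements StorageBackend {
  readonly id: string;
  private shares = new Map<string, Uint8Array>();

  constructor(id: string) {
    this.id = id;
  }

  async put(key: string, data: Uint8Array): Promise<Result<void>> {
    this.shares.set(key, data.slice()); // defensive copy
    return { ok: true, value: undefined };
  }

  async get(key: string): Promise<Result<Uint8Array>> {
    const data = this.shares.get(key);
    return data ? { ok: true, value: data } : { ok: false, error: 'NOT_FOUND' };
  }

  async delete(key: string): Promise<Result<void>> {
    this.shares.delete(key); // idempotent: missing key is not an error
    return { ok: true, value: undefined };
  }
}
```

A real S3 or Azure adapter replaces the Map with SDK calls; the application code above it never changes.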

ACI Interface

createStore(config: StoreConfig): XStore
Creates an xStore instance with an array of backends and threshold configuration (k, n). Validates that backends.length >= n. Returns an XStore with put(), get(), and delete() methods.
store.put(key: string, data: Uint8Array): Promise<Result<Manifest>>
Pads data (PKCS7), computes HMAC-SHA256 over padded payload, splits via XorIDA into n shares, routes each share to its designated backend, and returns a manifest containing per-share HMAC tags, backend references, threshold parameters, and creation timestamp.
store.get(manifest: Manifest): Promise<Result<Uint8Array>>
Reads k shares from backends (referenced in manifest), verifies HMAC-SHA256 on each share before reconstruction, reconstructs original data via XorIDA, strips PKCS7 padding, and verifies the whole-payload HMAC. Returns the original byte-identical data or a typed error.
store.delete(manifest: Manifest): Promise<Result<void>>
Deletes all shares referenced in the manifest from their respective backends. Idempotent — succeeds even if some shares are already gone.
Section 06

Complete Flow: Multi-Cloud 2-of-3 Storage

This example demonstrates xStore's core value: one interface, multi-backend orchestration, information-theoretic security.

Multi-cloud storage with progress tracking
import { createStore } from '@private.me/xstore';

// Configure three cloud backends
const s3 = new S3Backend({ region: 'us-east-1', bucket: 'shares' });
const azure = new AzureBlobBackend({ account: 'eu-storage', container: 'shares' });
const gcp = new GCPStorageBackend({ project: 'xstore', bucket: 'apac-shares' });

// Universal split-storage: 2-of-3 threshold
const store = createStore({
  backends: [s3, azure, gcp],
  threshold: { k: 2, n: 3 },
});

// Store 10MB dataset
const data = new Uint8Array(1024 * 1024 * 10);
const putResult = await store.put('patient-genome', data, {
  onProgress: (event) => {
    console.log(`Backend ${event.backendId}: ${event.status} | ${event.bytesTransferred}B`);
  }
});

const manifest = putResult.value;

// Later: retrieve from any 2 of 3 backends
const getResult = await store.get(manifest);
// getResult.value is byte-identical to data, HMAC-verified
Design in Action
The same code works with S3, Azure Blob, GCP Cloud Storage, filesystem, or custom backends. No single cloud provider ever sees enough to reconstruct. Even if one backend fails, two remaining shares still reconstruct. Progress callbacks provide real-time visibility. Error codes enable graceful degradation.
Section 07

Security Properties

xStore inherits XorIDA's information-theoretic guarantees and adds storage-layer integrity, manifest authentication, and backend isolation.

Property | Mechanism | Guarantee
Confidentiality | XorIDA k-of-n splitting | Any k-1 shares reveal zero information (information-theoretic)
Integrity (share) | HMAC-SHA256 per share | Tampered shares rejected before reconstruction
Integrity (payload) | HMAC-SHA256 over padded data | Whole-payload verification after reconstruction
Backend Isolation | One share per backend | Compromising one backend yields only noise
TTL Expiry | Manifest timestamp + TTL | Expired manifests rejected; backends can garbage-collect
HMAC Before Reconstruction
Every share's HMAC-SHA256 tag is verified before reconstruction, not after. Tampered or corrupted shares are rejected at the storage layer. This is a hard rule — no share enters XorIDA without passing integrity verification.
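This rule can be sketched with Node's built-in crypto primitives. Key sourcing and tag storage here are illustrative assumptions, not xStore's actual key-handling scheme:

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Compute the HMAC-SHA256 tag recorded in the manifest for a share.
// Key derivation here is illustrative, not the real xStore scheme.
function tagShare(share: Uint8Array, key: Buffer): Buffer {
  return createHmac('sha256', key).update(share).digest();
}

// Verify-before-reconstruct: a share whose tag does not match is
// rejected before it ever reaches XorIDA reconstruction.
function verifyShare(share: Uint8Array, tag: Buffer, key: Buffer): boolean {
  const computed = tagShare(share, key);
  // Constant-time compare prevents timing side channels.
  return computed.length === tag.length && timingSafeEqual(computed, tag);
}
```

Verifying before reconstruction (rather than after) means a single corrupted share fails fast at the storage layer instead of producing garbage output.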

vs. Traditional Encryption

Dimension | Encrypted Blob Storage | xStore Split-Storage
Key Management | Requires secure key storage and rotation | No keys — the split IS the security
Single Point of Trust | One key compromise = full plaintext | Must compromise k of n independent backends
Quantum Resistance | AES key search quadratically sped up by Grover's algorithm | Information-theoretic — unconditionally secure
Backend Portability | Re-encrypt on migration | Swap backend, keep shares
Section 08

Benchmarks

Performance characteristics measured on Node.js 22, Apple M2. xStore adds minimal overhead while eliminating single-point-of-compromise risk.

65: test cases
~1ms: split + route
~1ms: reconstruct
100%: coverage
Operation | Time | Notes
XorIDA split (1 KB) | ~58µs | 2-of-2 threshold over GF(2)
HMAC-SHA256 per share | <0.1ms | Integrity verification before routing
Route to local filesystem | <1ms | Direct write
Route to S3 | ~50–200ms | Network-dependent, regional
HMAC verify + reconstruct | <1ms | Verify before XOR reconstruction
Full pipeline (local) | ~1ms | Split → HMAC → route → verify
Full pipeline (cloud) | ~200–400ms | Dominated by network round-trip
Storage Trade-off
xStore doubles storage costs for 2-of-2 (two full-size shares). For 2-of-3, storage is 3x. The security benefit — eliminating single-point-of-compromise and achieving information-theoretic protection — is the explicit trade-off.
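The trade-off is easy to quantify: with XOR-based splitting every share is full payload size, so raw storage cost scales linearly with n regardless of k. A trivial helper (hypothetical, not part of the API):

```typescript
// Raw share storage for XOR-based k-of-n splitting: each of the
// n shares is payload-sized, so cost = payload bytes * n.
// Hypothetical helper for capacity planning, not an xStore API.
function shareStorageBytes(payloadBytes: number, n: number): number {
  return payloadBytes * n;
}

// 2-of-2 doubles storage; 2-of-3 triples it.
```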
Section 09

Honest Limitations

Five known limitations documented transparently. xStore trades storage efficiency for security guarantees.

Limitation | Impact | Mitigation
Network latency for remote backends | Cloud backends add 50–200ms per operation. Slowest backend determines overall latency. | Parallel retrieval minimizes impact. Local caching of frequently-accessed shares. Use local backends for latency-critical workloads.
No built-in replication | xStore routes shares but does not replicate within a backend. If a backend loses data, the share is gone. | Cloud backends (S3, Azure) provide their own replication. For local backends, use filesystem-level RAID. 2-of-3 configurations provide share-level redundancy.
Custom adapter required per backend | Each new storage backend requires implementing the StorageBackend interface. | The interface is minimal (put/get/delete). Reference adapters serve as templates. Community adapters can be contributed.
No streaming for large files | Entire file must fit in memory for XorIDA splitting. Files larger than RAM cannot be processed. | Chunk large files before splitting. xStore can store each chunk independently with sequential IDs. Reassemble after reconstruction.
No cross-backend atomic writes | xStore does not guarantee atomic writes across backends. A failure mid-write can leave shares in an inconsistent state. | HMAC verification detects incomplete writes. Implement write-ahead logging for critical data. Retrieval fails safely if any share is missing or corrupted.
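The chunking mitigation for large files can be sketched as follows. putChunked and its sequential key scheme are hypothetical helpers, not part of the xStore API:

```typescript
// Hypothetical chunking helper: split an oversized payload into
// RAM-sized chunks and store each under a sequential key so every
// chunk fits in memory for XorIDA splitting. Not an official API.
const CHUNK_SIZE = 4 * 1024 * 1024; // 4 MiB; tune to available memory

interface ChunkStore {
  put(key: string, data: Uint8Array): Promise<unknown>;
}

async function putChunked(
  store: ChunkStore,
  key: string,
  data: Uint8Array,
  chunkSize = CHUNK_SIZE,
): Promise<string[]> {
  const keys: string[] = [];
  for (let offset = 0, i = 0; offset < data.length; offset += chunkSize, i++) {
    const chunkKey = `${key}.${i}`; // reassemble in index order after get()
    await store.put(chunkKey, data.subarray(offset, offset + chunkSize));
    keys.push(chunkKey);
  }
  return keys;
}
```

Each chunk gets its own manifest; retrieval reverses the process by fetching chunks in index order and concatenating the reconstructed payloads.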
Section 10

Cross-ACI Composition

xStore is the storage layer that other ACIs build on. Instead of each product reinventing share persistence, they delegate to xStore and focus on domain logic.

XGHOST + XSTORE
xGhost splits algorithms into shares for ephemeral protection. xStore handles where those shares live — vendor share on S3, customer share in npm, backup share on air-gapped storage.
XPROVE + XSTORE
xProve generates cryptographic proofs. Proof artifacts can be large. xStore persists proof bundles across threshold-split, tamper-evident storage.
VAULTDROP + XSTORE
VaultDrop provides encrypted backup workflows. Under the hood, VaultDrop uses xStore to persist backup shares across geographically distributed backends. User sees "backup to cloud." System stores k-of-n shares across three continents.
Section 11

Get Started

Install xStore, implement a backend, and start storing threshold-split data in minutes.

Install
npm install @private.me/xstore
Quick Start
import { createStore } from '@private.me/xstore';
import { S3Backend, AzureBackend, FsBackend } from './backends';

const store = createStore({
  backends: [
    new S3Backend({ bucket: 'shares-us-east' }),
    new AzureBackend({ container: 'shares-eu-west' }),
    new FsBackend({ path: '/mnt/airgap/shares' }),
  ],
  threshold: { k: 2, n: 3 },
});

const data = new TextEncoder().encode('patient record');
const { value: manifest } = await store.put('record-123', data);
const { value: restored } = await store.get(manifest);
// restored is byte-identical to data
Advanced Topics

Enterprise Deployment & Verification

Verifiable code execution, enterprise CLI, and proof-based auditing for regulated industries.

Section 12a

Verifiable Split-Storage

Every xStore operation produces integrity artifacts that xProve can chain into a verifiable audit trail. Prove correct storage without revealing the data itself.

XPROVE STORAGE AUDIT
xStore's per-share HMAC tags and manifest checksums feed directly into xProve's HMAC chain verification. For regulated industries, this provides cryptographic proof of data integrity at rest — verifiable correct storage without data disclosure.
Section 12b

Enterprise CLI

Self-hosted split-storage server. Deploy xStore on your own infrastructure with Docker-ready, air-gapped capable deployment.

xstore-cli — Port 5000
@private.me/xstore-cli wraps xStore in a standalone HTTP server with 12 REST endpoints. 65 tests passing. Store instance management, data put/get/delete/list with pluggable backends (memory, filesystem). XorIDA-split shares, HMAC verification, namespace isolation, TTL expiry. 3-role RBAC (admin/operator/auditor), JSONL audit log, AES-256-GCM at rest.
READY TO DEPLOY?

xStore Universal Split-Storage

Talk to Sol, our AI platform engineer, or book a live demo with our team.


Deployment Options

SaaS (Recommended)

Fully managed infrastructure. Call our REST API, we handle scaling, updates, and operations.

  • Zero infrastructure setup
  • Automatic updates
  • 99.9% uptime SLA
  • Pay per use
View Pricing →

SDK Integration

Integrate xStore into your application for split-custody storage. Add it to your project to distribute data shares across multiple backends with XorIDA security.

  • npm install @private.me/xstore
  • TypeScript/JavaScript SDK
  • Full source access
  • Enterprise support available
Get Started →

On-Premise Enterprise

Self-hosted infrastructure for air-gapped, compliance, or data residency requirements.

  • Complete data sovereignty
  • Air-gap capable
  • Docker + Kubernetes ready
  • RBAC + audit logs included
Enterprise CLI →