xStore: Universal Split-Storage
One interface for multi-cloud storage with information-theoretic security. xStore is the universal split-storage layer — XorIDA-split every write, HMAC-verify every read, route shares to pluggable backends. No single backend ever holds enough to reconstruct.
Fast Onboarding: 3 Acceleration Levels
From zero-click code patterns to one-click deploy buttons, xStore offers three acceleration levels that reduce setup time from ~2 minutes to as low as ~15 seconds. Each level targets a different deployment context.
Level 1: Zero-Click Accept (Lazy Store Initialization)
For trusted environments, the lazy store pattern defers backend configuration until the first storage operation. No explicit init() call required — the store automatically configures its backends on first use and auto-accepts backend connections from known partners.
```typescript
import { XStore } from '@private.me/xstore';

// Store configures backends on first operation
const store = await XStore.lazy({ name: 'my-storage' });

// First put() triggers backend initialization automatically
const data = new TextEncoder().encode('Sensitive data');
await store.put('my-key', data);

// Retrieve data (reconstruction happens transparently)
const result = await store.get('my-key');
```
Setup time: ~80 seconds (backend initialization happens transparently during first operation)
Best for: Internal services, trusted partner integrations, development/testing
Level 2: One-Line CLI Setup (Invite-Based)
For production deployments, the one-line CLI command accepts an invite code and automatically configures the entire backend topology, credentials, and threshold settings in a single operation. No manual backend configuration, no credential juggling.
```shell
# Partner sends you invite code: XST-abc123def456
$ npx @private.me/xstore-cli init --invite XST-abc123def456
{
  "status": "initialized",
  "name": "my-storage",
  "backends": [
    { "id": "s3-us-east", "type": "s3", "region": "us-east-1" },
    { "id": "azure-eu-west", "type": "azure-blob", "region": "westeurope" },
    { "id": "local-fs", "type": "filesystem", "path": "/var/xstore/shares" }
  ],
  "threshold": 2,
  "totalShares": 3,
  "elapsed_seconds": 2.3
}
```
Setup time: ~40 seconds (single command, no manual configuration)
Best for: Production deployments, multi-cloud setups, compliance-driven topologies
Level 3: Deploy Button (One-Click Infrastructure)
For platform integrations, deploy buttons provide one-click infrastructure provisioning with xStore pre-configured. Clicking the button deploys a complete service with backend topology, credentials, and threshold configuration already set.
```markdown
<!-- Add to your README.md or integration docs -->
[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fprivate-me%2Fxstore-starter&env=XSTORE_INVITE_CODE&envDescription=Paste%20your%20xStore%20invite%20code%20here&envLink=https%3A%2F%2Fprivate.me%2Fdocs%2Fxstore)
```

The deployed service includes:
- Auto-configured backend topology (S3, Azure, IPFS)
- Pre-configured threshold settings from invite
- API endpoints for put/get operations
- Health check and monitoring
Setup time: ~15 seconds (one click, zero configuration)
Best for: SaaS integrations, marketplace apps, rapid prototyping
Setup Time Comparison
| Method | Setup Time | Steps Required | Configuration | Best For |
|---|---|---|---|---|
| Manual (Traditional) | ~2 minutes | 3 (config + backends + code) | Manual backend setup | Full control, learning |
| Level 1: Zero-Click | ~80 seconds | 0 (lazy init) | Auto-accept from env | Trusted integrations |
| Level 2: One-Line CLI | ~40 seconds | 1 (accept invite) | Invite code only | Production deployments |
| Level 3: Deploy Button | ~15 seconds | 1 (click button) | Zero configuration | SaaS integrations |
Getting Started: Fastest Path
The recommended onboarding path depends on your deployment context:
- New to xStore? Start with a Deploy Button to see xStore running in production in ~15 seconds. Deploy the starter template, then explore the code.
- Production deployment? Use the One-Line CLI with an invite code. One command configures your entire backend topology in ~40 seconds.
- Internal services? Use Zero-Click Accept with lazy initialization. No setup time, backend configuration happens transparently on first use.
- Want full control? Follow the standard 2-minute setup flow with explicit backend configuration.
```shell
# 1. Click "Deploy to Vercel" button (~15 sec)
# 2. Paste invite code when prompted
# 3. Service deploys with xStore pre-configured
# 4. Clone the deployed repo to see the code:
$ git clone https://github.com/your-username/deployed-xstore-service
$ cd deployed-xstore-service
$ cat src/index.ts
```

```typescript
// You'll see the lazy store pattern in action:
const store = await XStore.lazy({ name: 'deployed-storage' });

app.post('/store', async (req, res) => {
  const result = await store.put(req.body.key, req.body.data);
  res.json(result);
});
```
Executive Summary
Every ACI reinvents storage. xStore eliminates the problem by providing a universal split-storage layer that abstracts XorIDA share persistence behind a pluggable backend interface.
Two functions cover 80% of use cases: store.put() splits data via XorIDA into k-of-n shares, computes HMAC-SHA256 per share, routes each to a designated backend (AWS S3, Azure Blob, filesystem, or custom), and returns a manifest with backend routing + integrity tags. store.get() fetches k shares, verifies HMAC on each before reconstruction, XorIDA-reconstructs the data, and returns the original byte-identical payload.
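The manifest returned by put() is what ties shares back to backends. Its actual schema is not shown in this document; based on the description above (backend routing plus integrity tags), a hypothetical shape might look like the following sketch, where every field name is an illustrative assumption rather than the library's real type:

```typescript
// Hypothetical manifest shape implied by the description above.
// All field names are illustrative assumptions, not the xStore schema.
interface ShareLocation {
  backendId: string;   // which backend holds this share
  shareKey: string;    // storage key within that backend
  hmac: string;        // hex-encoded HMAC-SHA256 integrity tag for the share
}

interface Manifest {
  key: string;                          // logical key passed to put()
  threshold: { k: number; n: number };  // k-of-n reconstruction threshold
  shares: ShareLocation[];              // one entry per routed share
  payloadHmac: string;                  // whole-payload tag, checked after reconstruction
  createdAt: number;                    // epoch ms, used with ttlSeconds for expiry
  ttlSeconds?: number;
}

const example: Manifest = {
  key: 'record-123',
  threshold: { k: 2, n: 3 },
  shares: [
    { backendId: 's3-us-east', shareKey: 'record-123/share-0', hmac: 'ab12' },
    { backendId: 'azure-eu-west', shareKey: 'record-123/share-1', hmac: 'cd34' },
    { backendId: 'local-fs', shareKey: 'record-123/share-2', hmac: 'ef56' },
  ],
  payloadHmac: '0099',
  createdAt: Date.now(),
};
```

A shape like this is what makes manifests portable: any consumer holding the manifest (and the HMAC key material) can locate, verify, and reconstruct without knowing how the shares were originally routed.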
No keys. No rotation. No cross-backend consistency logic. xStore handles the storage orchestration; your application handles the business logic. Move from S3 to GCP by swapping a backend implementation — zero application code changes. Store Share 1 in the EU and Share 2 in the US for compliance. Keep Share 3 on an air-gapped filesystem for disaster recovery.
One universal abstraction. Backends are pluggable. Manifests are portable. Integrity is cryptographic. Threshold is configurable (k-of-n). Zero npm dependencies at the core.
Developer Experience
xStore provides progress callbacks for multi-backend visibility, structured error codes with recovery guidance, and a minimal API surface. Track uploads across regions. Handle failures gracefully. Debug storage orchestration with confidence.
Progress Callbacks
Every put() and get() operation accepts optional progress callbacks that emit per-backend status updates. Real-time visibility into which shares are being written to which backends.
```typescript
const result = await store.put('large-dataset', data, {
  onProgress: (event) => {
    console.log(`Backend ${event.backendId}: share ${event.shareIndex}/${event.totalShares}`);
    console.log(`Status: ${event.status} | ${event.bytesTransferred}/${event.totalBytes} bytes`);
  }
});
```
Error Codes
More than 15 error codes span five categories, including Backend Management, Share Operations, and Data Operations. Every error includes a code, a message, and recovery guidance.
| Category | Error Code | Recovery |
|---|---|---|
| Backend | BACKEND_UNAVAILABLE | Retry with backoff. Fail over to other backends. |
| Backend | BACKEND_QUOTA_EXCEEDED | Add capacity or configure TTL expiry. |
| Share Ops | SHARE_INTEGRITY_FAILED | HMAC verification failed; the share was tampered with. |
| Share Ops | INSUFFICIENT_SHARES | Fewer than k shares available. Check backend health. |
| Data Ops | MANIFEST_INVALID | Manifest corrupted or tampered with. Re-upload the data. |
| Data Ops | PAYLOAD_HMAC_FAILED | Data tampered with, wrong HMAC key, or share corruption. |
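A retry policy can key off these codes: transient backend failures are worth retrying, while integrity failures are terminal. The sketch below assumes errors carry a `code` field (an assumption about the API's error shape); the codes themselves come from the table above.

```typescript
// Sketch: classify xStore error codes as retryable or terminal.
// The { code, message } error shape is an assumed API detail.
type XStoreError = { code: string; message: string };

function isRetryable(err: XStoreError): boolean {
  switch (err.code) {
    case 'BACKEND_UNAVAILABLE':    // transient: retry with backoff
      return true;
    case 'SHARE_INTEGRITY_FAILED': // tampering: retrying won't help
    case 'PAYLOAD_HMAC_FAILED':
    case 'MANIFEST_INVALID':
    case 'INSUFFICIENT_SHARES':    // needs operator action (backend health)
    case 'BACKEND_QUOTA_EXCEEDED': // needs capacity or TTL changes
      return false;
    default:
      return false;                // fail closed on unknown codes
  }
}
```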
The Problem
Every ACI that splits data with XorIDA writes its own storage logic. There is no standard interface, no backend portability, and no cross-ACI reuse.
When data is split into k-of-n shares, those shares must be stored across independent backends. Today, each application hardcodes where shares go — one app stores on S3, another on local disk, another in a database. There is no common abstraction, no manifest format, no integrity verification at the storage layer.
This creates three problems: Backend lock-in (moving from AWS to Azure means rewriting storage code), No portability (shares from one ACI cannot be consumed by another without bespoke integration), and No integrity guarantees (corrupted shares are discovered only at reconstruction time, when it is too late).
Use Cases
Cloud Agnostic
Store shares across AWS, Azure, and GCP. No single cloud provider sees your data. A compromised cloud account reveals only cryptographic noise.

Data Residency
EU data stays in EU backends. US data stays in US backends. Satisfy GDPR, CCPA, and data residency requirements with one storage layer.

Air-Gapped
Offline filesystem backends for classified data. Shares never touch a network. Combine with online backends for hybrid resilience.

IP Protection
Split proprietary datasets across independent backends to prevent database exfiltration. No single backend breach yields usable data.

Architecture
Five-stage pipeline: pad, authenticate, split, route, store. Every stage independently verifiable. Manifest ties it all together.
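To make the split and authenticate stages concrete, here is a minimal 2-of-2 XOR split with per-share HMAC tags. This is an illustrative sketch of the technique, not xStore's internals; the function names and key handling are invented for the example.

```typescript
import { randomBytes, createHmac } from 'crypto';

// Share 1 is uniform random; share 2 is data XOR share 1.
// Either share alone is statistically indistinguishable from noise.
function split2of2(data: Uint8Array, hmacKey: Buffer) {
  const pad = new Uint8Array(randomBytes(data.length));
  const mix = data.map((b, i) => b ^ pad[i]);
  const tag = (s: Uint8Array) =>
    createHmac('sha256', hmacKey).update(s).digest('hex');
  return { shares: [pad, mix], tags: [tag(pad), tag(mix)] };
}

// Reconstruction is a byte-wise XOR of the two shares.
function reconstruct2of2(a: Uint8Array, b: Uint8Array): Uint8Array {
  return a.map((x, i) => x ^ b[i]);
}
```

The real pipeline generalizes this to k-of-n and adds the pad, route, and store stages, but the security argument is the same: without k shares, what an attacker holds is uniformly distributed.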
Pluggable Backend Interface
The StorageBackend interface defines three methods: put, get, and delete. Any storage system implementing this interface can serve as a share backend. Zero application-layer changes to swap backends.
```typescript
interface StorageBackend {
  readonly id: string;
  put(key: string, data: Uint8Array): Promise<Result<void>>;
  get(key: string): Promise<Result<Uint8Array>>;
  delete(key: string): Promise<Result<void>>;
}
```
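As a reference, a minimal filesystem adapter might look like the following sketch. The `Result` shape shown here is an assumption (the library's actual `Result` type may differ), and error handling is deliberately simple.

```typescript
import { promises as fs } from 'fs';
import * as path from 'path';

// Assumed Result shape — substitute the library's actual type.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

interface StorageBackend {
  readonly id: string;
  put(key: string, data: Uint8Array): Promise<Result<void>>;
  get(key: string): Promise<Result<Uint8Array>>;
  delete(key: string): Promise<Result<void>>;
}

// Minimal filesystem adapter implementing the three-method interface.
class FilesystemBackend implements StorageBackend {
  constructor(readonly id: string, private readonly dir: string) {}

  async put(key: string, data: Uint8Array): Promise<Result<void>> {
    try {
      await fs.mkdir(this.dir, { recursive: true });
      await fs.writeFile(path.join(this.dir, key), data);
      return { ok: true, value: undefined };
    } catch (e) {
      return { ok: false, error: String(e) };
    }
  }

  async get(key: string): Promise<Result<Uint8Array>> {
    try {
      const buf = await fs.readFile(path.join(this.dir, key));
      return { ok: true, value: new Uint8Array(buf) };
    } catch (e) {
      return { ok: false, error: String(e) };
    }
  }

  async delete(key: string): Promise<Result<void>> {
    try {
      await fs.unlink(path.join(this.dir, key));
      return { ok: true, value: undefined };
    } catch (e) {
      return { ok: false, error: String(e) };
    }
  }
}
```

Errors are returned as values rather than thrown, which lets the orchestration layer treat a failing backend as one vote among n rather than a fatal exception.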
ACI Interface
Complete Flow: Multi-Cloud 2-of-3 Storage
This example demonstrates xStore's core value: one interface, multi-backend orchestration, information-theoretic security.
```typescript
import { createStore } from '@private.me/xstore';

// Configure three cloud backends
const s3 = new S3Backend({ region: 'us-east-1', bucket: 'shares' });
const azure = new AzureBlobBackend({ account: 'eu-storage', container: 'shares' });
const gcp = new GCPStorageBackend({ project: 'xstore', bucket: 'apac-shares' });

// Universal split-storage: 2-of-3 threshold
const store = createStore({
  backends: [s3, azure, gcp],
  threshold: { k: 2, n: 3 },
});

// Store 10 MB dataset
const data = new Uint8Array(1024 * 1024 * 10);
const putResult = await store.put('patient-genome', data, {
  onProgress: (event) => {
    console.log(`Backend ${event.backendId}: ${event.status} | ${event.bytesTransferred}B`);
  }
});
const manifest = putResult.value;

// Later: retrieve from any 2 of 3 backends
const getResult = await store.get(manifest);
// getResult.value is byte-identical to data, HMAC-verified
```
Security Properties
xStore inherits XorIDA's information-theoretic guarantees and adds storage-layer integrity, manifest authentication, and backend isolation.
| Property | Mechanism | Guarantee |
|---|---|---|
| Confidentiality | XorIDA k-of-n splitting | Any k-1 shares reveal zero information (information-theoretic) |
| Integrity (share) | HMAC-SHA256 per share | Tampered shares rejected before reconstruction |
| Integrity (payload) | HMAC-SHA256 over padded data | Whole-payload verification after reconstruction |
| Backend Isolation | One share per backend | Compromising one backend yields only noise |
| TTL Expiry | Manifest timestamp + TTL | Expired manifests rejected; backends can garbage-collect |
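The per-share integrity check in the table amounts to a constant-time tag comparison before any reconstruction work begins. A sketch (key management is omitted and the function name is illustrative):

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Verify a share's HMAC-SHA256 tag before it is used for reconstruction.
// timingSafeEqual avoids leaking tag bytes through comparison timing.
function verifyShare(share: Uint8Array, key: Buffer, tagHex: string): boolean {
  const expected = Buffer.from(tagHex, 'hex');
  const computed = createHmac('sha256', key).update(share).digest();
  return expected.length === computed.length && timingSafeEqual(computed, expected);
}
```

Rejecting a tampered share here, rather than after reconstruction, is what turns corruption into a recoverable event: with k-of-n routing, any other healthy backend can supply a replacement share.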
vs. Traditional Encryption
| Dimension | Encrypted Blob Storage | xStore Split-Storage |
|---|---|---|
| Key Management | Requires secure key storage and rotation | No keys — the split IS the security |
| Single Point of Trust | One key compromise = full plaintext | Must compromise k-of-n independent backends |
| Quantum Resistance | Grover's algorithm halves effective AES key strength | Information-theoretic — unconditionally secure |
| Backend Portability | Re-encrypt on migration | Swap backend, keep shares |
Benchmarks
Performance characteristics measured on Node.js 22, Apple M2. xStore adds minimal overhead while eliminating single-point-of-compromise risk.
| Operation | Time | Notes |
|---|---|---|
| XorIDA split (1 KB) | ~58µs | 2-of-2 threshold over GF(2) |
| HMAC-SHA256 per share | <0.1ms | Integrity verification before routing |
| Route to local filesystem | <1ms | Direct write |
| Route to S3 | ~50–200ms | Network-dependent, regional |
| HMAC verify + reconstruct | <1ms | Verify before XOR reconstruction |
| Full pipeline (local) | ~1ms | Split → HMAC → route → verify |
| Full pipeline (cloud) | ~200–400ms | Dominated by network round-trip |
Honest Limitations
Five known limitations documented transparently. xStore trades storage efficiency for security guarantees.
| Limitation | Impact | Mitigation |
|---|---|---|
| Network latency for remote backends | Cloud backends add 50–200ms per operation. Slowest backend determines overall latency. | Parallel retrieval minimizes impact. Local caching of frequently-accessed shares. Use local backends for latency-critical workloads. |
| No built-in replication | xStore routes shares but does not replicate within a backend. If a backend loses data, the share is gone. | Cloud backends (S3, Azure) provide their own replication. For local backends, use filesystem-level RAID. 2-of-3 configurations provide share-level redundancy. |
| Custom adapter required per backend | Each new storage backend requires implementing the StorageBackend interface. | The interface is minimal (put/get/delete). Reference adapters serve as templates. Community adapters can be contributed. |
| No streaming for large files | Entire file must fit in memory for XorIDA splitting. Files larger than RAM cannot be processed. | Chunk large files before splitting. xStore can store each chunk independently with sequential IDs. Reassemble after reconstruction. |
| No cross-backend atomic writes | xStore does not guarantee atomic writes across backends. A failure mid-write can leave shares in an inconsistent state. | HMAC verification detects incomplete writes. Implement write-ahead logging for critical data. Retrieval fails safely if any share is missing or corrupted. |
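For the streaming limitation above, the suggested mitigation (chunk, store each chunk under a sequential ID, reassemble after reconstruction) can be sketched as follows. The `${key}.chunk-${i}` naming scheme is illustrative, not an xStore convention.

```typescript
// Split an in-memory buffer into fixed-size chunks for independent storage.
function* chunks(data: Uint8Array, size: number): Generator<Uint8Array> {
  for (let off = 0; off < data.length; off += size) {
    yield data.subarray(off, Math.min(off + size, data.length));
  }
}

// Usage sketch (store is an xStore instance; the key scheme is illustrative):
// let i = 0;
// for (const c of chunks(bigFile, 4 * 1024 * 1024)) {
//   await store.put(`${key}.chunk-${i++}`, c);
// }
// On retrieval, fetch chunks in ID order and concatenate.
```

Each chunk gets its own manifest and its own HMAC tags, so a corrupted chunk is detected and re-fetched independently rather than forcing a full re-download.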
Cross-ACI Composition
xStore is the storage layer that other ACIs build on. Instead of each product reinventing share persistence, they delegate to xStore and focus on domain logic.
Get Started
Install xStore, implement a backend, and start storing threshold-split data in minutes.
```shell
npm install @private.me/xstore
```

```typescript
import { createStore } from '@private.me/xstore';
import { S3Backend, AzureBackend, FsBackend } from './backends';

const store = createStore({
  backends: [
    new S3Backend({ bucket: 'shares-us-east' }),
    new AzureBackend({ container: 'shares-eu-west' }),
    new FsBackend({ path: '/mnt/airgap/shares' }),
  ],
  threshold: { k: 2, n: 3 },
});

const data = new TextEncoder().encode('patient record');
const { value: manifest } = await store.put('record-123', data);
const { value: restored } = await store.get(manifest);
// restored is byte-identical to data
```
Enterprise Deployment & Verification
Verifiable code execution, enterprise CLI, and proof-based auditing for regulated industries.
Verifiable Split-Storage
Every xStore operation produces integrity artifacts that xProve can chain into a verifiable audit trail. Prove correct storage without revealing the data itself.
Enterprise CLI
Self-hosted split-storage server. Deploy xStore on your own infrastructure with Docker-ready, air-gapped capable deployment.
Talk to Sol, our AI platform engineer, or book a live demo with our team.
Deployment Options
SaaS Recommended
Fully managed infrastructure. Call our REST API; we handle scaling, updates, and operations.
- Zero infrastructure setup
- Automatic updates
- 99.9% uptime SLA
- Pay per use
SDK Integration
Integrate xStore into your application for split-custody storage. Add xStore to your project to distribute data shares across multiple backends with XorIDA security.

```shell
npm install @private.me/xstore
```

- TypeScript/JavaScript SDK
- Full source access
- Enterprise support available
On-Premise Enterprise
Self-hosted infrastructure for air-gapped, compliance, or data residency requirements.
- Complete data sovereignty
- Air-gap capable
- Docker + Kubernetes ready
- RBAC + audit logs included