PATENT PORTFOLIO · APPLICATION 4

Distributed Multi-Processor Threshold Reconstruction and Secure Code Deployment Using Information-Theoretic Secret Sharing Over GF(2) with Heterogeneous TEE Isolation

Multi-processor TEE distribution, AI consensus rings, rotating firmware reconstruction, Double XorIDA dual-mode encoding, heterogeneous memory architecture defense, Xboot zero-plaintext server code deployment, and privacy-preserving computation on threshold shares (Xcompute).

20 claims filed · 14 independent · 6 dependent
Across the full portfolio: ~88 method · ~24 system · ~6 apparatus · ~8 media claims
GF(2) field arithmetic · 7 sections · 20 figures
Filing Strategy: Claims are organized into filing groups. Group A (20 claims) is filed now; Groups B–G (106 claims) are reserved for six continuation applications using the same specification. All claims are supported by the current specification.
GROUP A · FILING NOW · 20 claims · 14 independent · Core TEE distribution + memory defense
Section 1 · Claims 1-36 · Multi-Processor TEE Distribution

Multi-Processor Threshold Reconstruction Across Heterogeneous TEEs

Distribute shares across processors with distinct TEE implementations. Each processor computes GF(2) XOR partial results within its TEE. Final reconstruction merges partial results. Compromise of any single TEE yields zero information about plaintext.

CLAIM 1 · Method · Multi-processor threshold reconstruction across heterogeneous TEEs with GF(2) XOR
Compromise of any single TEE yields zero information — partial results are information-theoretically random.
USE CASE: A hospital's patient records are split across Intel, Nvidia, and ARM processors. A DDR5 interposer attack yields one share — zero information.
C21: XorIDA K=2, N=3 default configuration — any 2 of 3 processors sufficient for reconstruction.
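The K=2, N=3 default in C21 can be illustrated with a replicated XOR construction (a minimal sketch, not the patented XorIDA encoding): three fragments whose XOR equals the secret, each share holding two of the three. Any single share is statistically independent of the plaintext; any two shares contain all three fragments.

```python
import secrets

def split_2of3(secret: bytes):
    # Three fragments r1, r2, r3 with r1 ^ r2 ^ r3 = secret;
    # each share replicates two of the three fragments.
    r1 = secrets.token_bytes(len(secret))
    r2 = secrets.token_bytes(len(secret))
    r3 = bytes(s ^ a ^ b for s, a, b in zip(secret, r1, r2))
    return [(r1, r2), (r2, r3), (r3, r1)]

# Which fragment indices each share number carries.
FRAG = {1: (1, 2), 2: (2, 3), 3: (3, 1)}

def reconstruct(i, share_i, j, share_j):
    # Any two distinct shares together contain all three fragments,
    # so the secret is their bytewise XOR. One share alone holds two
    # uniformly random fragments: zero information about the secret.
    pool = {}
    for idx, frag in zip(FRAG[i], share_i):
        pool[idx] = frag
    for idx, frag in zip(FRAG[j], share_j):
        pool[idx] = frag
    return bytes(a ^ b ^ c for a, b, c in zip(pool[1], pool[2], pool[3]))
```

Any processor pair (1,2), (2,3), or (3,1) reconstructs identically, matching the "any 2 of 3" property.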
Claim 4 · Method

Vendor-Specific Hardware Vulnerability Protection

Distribute shares across TEEs from distinct manufacturers. A vulnerability in one vendor's silicon (Intel SGX, ARM TrustZone, Nvidia Confidential Computing) cannot compromise threshold secrecy. Multi-manufacturer isolation by construction.

CLAIM 4 · Method · Vendor-specific hardware vulnerability protection via multi-manufacturer TEE distribution
Intel SGX + Nvidia CC + ARM TrustZone — a vulnerability in one vendor's silicon is insufficient.
USE CASE: TEE.fail (Oct 2025) broke Intel SGX. With multi-vendor distribution, an SGX-only exploit yields one share — data remains secure on Nvidia and ARM processors.
C24: Intel DDR5 + Nvidia HBM + ARM SRAM memory type diversity — each memory architecture requires an independent attack vector.
Claim 6 · Method

Multi-Device Geographic Distribution

Distribute shares across physically separated devices in different geographic locations. Reconstruction is location-independent — any k devices from any locations can reconstruct. Geographic separation provides physical security layer.

CLAIM 6 · Method · Multi-device geographic distribution with location-independent reconstruction
Physical seizure of one location = one share. Geographic separation as a security layer.
USE CASE: A law firm distributes client privilege shares across New York, London, and Tokyo. A subpoena in one jurisdiction yields one share — insufficient for reconstruction.
C26: TLS 1.3 with mutual authentication (mTLS) for inter-device share transport.
C27: Cross-vendor TEE at each geographic location — compound geographic + vendor isolation.
Claim 10 · Method

Memory Bus Interposition Defense

Each processor holds one share in its TEE-protected memory. A physical interposer on any single memory bus captures one share — information-theoretically zero information about the secret. Defense is structural, not relying on encryption strength.

CLAIM 10 · Method · Memory bus interposition defense — single interposer yields zero information
Structural defense: not encryption-dependent, information-theoretically secure against physical probing.
USE CASE: A state actor installs a DDR5 interposer on a server's memory bus. They capture one share in transit — mathematically equivalent to random noise. Zero information gained.
C30: DDR5 + HBM + SRAM memory diversity — each memory type requires a physically distinct interposition technique.
Claim 13 · Method

Streaming Chunk Reconstruction

Reconstruct data in streaming chunks — complete plaintext never exists in memory at once. Each chunk is reconstructed, processed, and discarded before the next chunk begins. Cold boot attacks capture at most one chunk.

CLAIM 13 · Method · Streaming chunk reconstruction — complete data never in memory at once
Each chunk is reconstructed, processed, and discarded. Cold boot = at most one chunk.
USE CASE: A 10GB database backup is reconstructed in 4KB chunks. A cold boot attack at any instant captures at most 4KB — 0.00004% of the data.
C33: Chunk size = display viewport — only the visible portion of the document exists in decrypted memory at once.
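The bounded-exposure property can be sketched with a per-chunk XOR split (a 2-of-2 simplification of the k-of-n encoding; the chunk size and function names are illustrative). Only one plaintext chunk exists at any instant, and its buffer is overwritten before the next chunk is decoded.

```python
import secrets

CHUNK = 4096  # 4 KB, matching the use case above

def split_stream(data: bytes):
    # 2-of-2 additive XOR split, applied independently per chunk.
    shares_a, shares_b = [], []
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        pad = secrets.token_bytes(len(chunk))
        shares_a.append(pad)
        shares_b.append(bytes(p ^ c for p, c in zip(pad, chunk)))
    return shares_a, shares_b

def stream_reconstruct(shares_a, shares_b, process):
    # At any instant at most one plaintext chunk exists; it is handed
    # to `process` and then zeroed (best-effort in Python) before the
    # next chunk is decoded.
    for a, b in zip(shares_a, shares_b):
        plain = bytearray(x ^ y for x, y in zip(a, b))
        process(bytes(plain))
        for i in range(len(plain)):
            plain[i] = 0
```

A memory capture during the loop sees at most one CHUNK-sized plaintext window, never the whole stream.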
Claim 17 · Apparatus

FPGA XOR Gate Array for Wire-Speed Reconstruction

Field-programmable gate array with dedicated XOR gate array for threshold reconstruction at wire speed (>1 Gbit/s). Hardware-accelerated GF(2) operations bypass software overhead entirely. Suitable for datacenter-scale deployment.

CLAIM 17 · Apparatus · FPGA XOR gate array for wire-speed threshold reconstruction (>1 Gbit/s)
Hardware-accelerated GF(2) operations — bypasses all software overhead.
USE CASE: A CDN edge node reconstructs threshold-protected video streams at wire speed. FPGA XOR gates process shares at line rate — zero software bottleneck.
C35: Single SoC integration — FPGA XOR array, share buffers, and HMAC verification on one chip for minimal latency.
Section 1 · Remaining Claims

All TEE Distribution Claims (C2-C36)

20 independent claims (C1-C20) covering the full spectrum of multi-processor TEE distribution, plus 16 dependent claims (C21-C36) adding configuration specifics.

Independent Claims (not shown above):

  • C2 · Heterogeneous TEE attestation verification — validate each processor's TEE before share distribution
  • C3 · Multi-processor fault tolerance — any k-of-n processors sufficient, graceful degradation when processors fail
  • C5 · Multi-jurisdiction legal protection — shares in different legal jurisdictions, no single subpoena suffices
  • C7 · Split-channel transport-to-hardware binding — each share arrives via distinct transport to distinct TEE
  • C8 · Secure display path — reconstructed data routed to trusted display, never touches general-purpose memory
  • C9 · Ephemeral viewing — reconstructed data displayed for bounded time, then securely zeroed
  • C11 · Distributed HMAC verification — each processor verifies its share's HMAC independently before partial computation
  • C12 · Multi-processor audit trail — each TEE logs verification and computation events
  • C14 · Dynamic share redistribution — re-share without full reconstruction when a processor is compromised
  • C15 · TEE version enforcement — minimum attestation version required before share delivery
  • C16 · Multi-processor key ceremony — threshold-shared master key for at-rest encryption of share stores
  • C18 · ASIC-optimized GF(2) reconstruction — application-specific integrated circuit for maximum throughput
  • C19 · Hardware random number generator integration — TRNG-sourced entropy for share generation within TEE
  • C20 · Tamper-responsive enclosure — physical intrusion triggers share zeroization across all processors

Dependent Claims:

  • C21 → C1: XorIDA K=2, N=3 default configuration
  • C22 → C2: Remote attestation via manufacturer certificate chain
  • C23 → C3: Automatic failover with share re-routing to standby processor
  • C24 → C4: Intel DDR5 + Nvidia HBM + ARM SRAM memory types
  • C25 → C5: GDPR + CCPA + PDPA multi-jurisdiction mapping
  • C26 → C6: TLS 1.3 mTLS inter-device transport
  • C27 → C7: Cross-vendor TEE per geographic location
  • C28 → C8: GPU-direct display path bypassing CPU memory
  • C29 → C9: Configurable viewing window 5s-300s with forced zeroize
  • C30 → C10: DDR5 + HBM + SRAM memory diversity
  • C31 → C11: Parallel HMAC verification across all n processors simultaneously
  • C32 → C12: Tamper-evident log with hash chain per TEE
  • C33 → C13: Chunk size = display viewport
  • C34 → C14: Proactive share refresh on configurable schedule
  • C35 → C17: Single SoC integration (FPGA + buffers + HMAC)
  • C36 → C19: NIST SP 800-90B compliant entropy source
GROUP B · CONTINUATION 1 · 12 claims · AI consensus ring
Section 2 · Claims 37-48 · AI Safety / Consensus Ring CONTINUATION 1

AI Model Output Verification via Threshold-Shared Signing Keys

Distribute a signing key across n AI model instances via threshold sharing. Each model computes its partial signature over the output. Valid signature requires k models to produce consistent output. A hallucinating or poisoned model cannot produce a valid signature alone.

CLAIM 37 · Method · AI model output verification via threshold-shared signing keys — consensus required
A hallucinating or poisoned model cannot produce a valid signature alone.
USE CASE: Three AI models (Claude, GPT, Gemini) must agree before output is signed. A hallucinating model can't produce a valid signature alone.
C43: Consensus criteria variants — exact match, semantic similarity threshold, factual consistency check, or confidence-weighted agreement.
Section 2 · Remaining Claims CONTINUATION 1

All AI Safety / Consensus Ring Claims (C38-C48)

6 independent claims (C37-C42) for AI safety via threshold consensus, plus 6 dependent claims (C43-C48).

Independent Claims:

  • C38 · Prompt injection defense via instruction splitting — AI instructions split across n independent channels, reconstructed only at inference time
  • C39 · Model poisoning detection via cross-model divergence — threshold consensus detects when one model deviates from k-1 others
  • C40 · Correlated hallucination detection — independent model outputs compared, correlated errors flagged when agreement is suspiciously high on factual claims
  • C41 · Irreversible action protection — high-stakes actions (financial, medical, legal) require k-of-n model consensus before execution
  • C42 · AI consensus ring topology — models arranged in ring, each validates predecessor's output, rotating leader for fairness

Dependent Claims:

  • C43 → C37: Consensus criteria (exact/semantic/factual/confidence)
  • C44 → C38: Per-channel HMAC verification before instruction reconstruction
  • C45 → C39: Configurable divergence threshold per model pair
  • C46 → C40: Automatic quarantine of suspected hallucinating model
  • C47 → C41: Configurable action severity levels mapping to different k thresholds
  • C48 → C42: Byzantine fault tolerance — ring continues operating with up to f=(n-1)/3 compromised members
GROUP C · CONTINUATION 2 · 12 claims · Rotating firmware reconstruction
Section 3 · Claims 49-60 · Firmware Ring Trust CONTINUATION 2

Rotating Firmware Reconstruction

Periodically reconstruct each chip's firmware from threshold shares held by other chips. A rootkit installed at any point is destroyed at the next reconstruction cycle. Persistence window is bounded by the rotation interval.

CLAIM 49 · Method · Rotating firmware reconstruction — periodic clean flash from threshold shares
Rootkit persistence is bounded by the rotation interval. Phoenix Architecture: continuous rebirth.
USE CASE: Every 10 minutes, each chip's firmware is reconstructed from shares held by other chips. A rootkit installed at minute 1 is destroyed at minute 10.
C55: Configurable reprogram cycle from 1 minute to 24 hours — a tradeoff between security and flash wear.
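The rotation cycle can be sketched as follows (an n-of-n XOR split and a simulated flash buffer; the real system's k-of-n encoding, ring transport, and flash driver are not modeled):

```python
import hashlib
import secrets

def make_ring_shares(firmware: bytes, n: int = 3):
    # n-of-n XOR split, held by the other chips in the ring.
    shares = [secrets.token_bytes(len(firmware)) for _ in range(n - 1)]
    last = bytearray(firmware)
    for s in shares:
        for i, byte in enumerate(s):
            last[i] ^= byte
    shares.append(bytes(last))
    return shares

def rotation_cycle(flash: bytearray, shares, expected_sha256: str):
    # One cycle: rebuild a clean image from the ring's shares, verify
    # its hash, then overwrite whatever is currently in flash, wiping
    # any rootkit installed since the last cycle.
    clean = bytearray(len(flash))
    for s in shares:
        for i, byte in enumerate(s):
            clean[i] ^= byte
    if hashlib.sha256(clean).hexdigest() != expected_sha256:
        raise RuntimeError("share corruption detected: refusing to flash")
    flash[:] = clean
```

A rootkit that patches `flash` between cycles survives only until the next `rotation_cycle` call, bounding its persistence window by the rotation interval.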
Section 3 · Remaining Claims CONTINUATION 2

All Firmware Ring Trust Claims (C50-C60)

6 independent claims (C49-C54) for firmware integrity via threshold sharing, plus 6 dependent claims (C55-C60).

Independent Claims:

  • C50 · Bounded persistence window — maximum compromise duration guaranteed by reconstruction interval, no persistent rootkit possible
  • C51 · Cross-vendor firmware ring — each chip's firmware shares held by chips from different manufacturers
  • C52 · Formally verifiable bootloader — bootloader code small enough for formal verification, reconstructed from shares at each boot
  • C53 · GF(2)-enabled real-time reconstruction — XOR-only operations fast enough for continuous firmware refresh without service interruption
  • C54 · Firmware version attestation chain — each reconstruction includes version hash signed by the reconstructing ring members

Dependent Claims:

  • C55 → C49: Configurable reprogram cycle 1 minute to 24 hours
  • C56 → C50: Exponential backoff on reconstruction failure detection
  • C57 → C51: Minimum 3 distinct silicon vendors in ring
  • C58 → C52: Bootloader size cap <10KB for tractable formal verification
  • C59 → C53: Sub-millisecond reconstruction for firmware images up to 16MB
  • C60 → C54: Merkle tree hash of firmware image with root signed by ring majority
GROUP D · CONTINUATION 3 · 20 claims · Double XorIDA + heterogeneous memory
Section 4 · Claims 61-78 · Dual-Mode / Double XorIDA CONTINUATION 3

Double XorIDA — Two-Pass GF(2) Encoding

First pass: threshold secret sharing (k-of-n secrecy). Second pass: erasure coding over the shares (fault tolerance). Result: simultaneous information-theoretic secrecy AND erasure resilience at 2.0x total overhead. Ordering is critical — secrecy pass must precede efficiency pass.

CLAIM 69 · Method · Double XorIDA — two-pass GF(2) encoding: secrecy + erasure resilience
Simultaneous information-theoretic privacy + fault tolerance at 2.0x overhead.
USE CASE: Patient data is first split for secrecy (2-of-3), then those shares are erasure-coded (3-of-4). Result: quantum-proof privacy + fault tolerance at 2.0x overhead.
C71: 2.0x total storage overhead (proven bound).
C72: Same FPGA hardware for both passes (GF(2)).
C73: Pipelined — pass 2 begins before pass 1 completes.
C74: Security preserved through efficiency pass (proven).
C75: Pass ordering is critical — secrecy before efficiency.
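A minimal two-pass sketch (2-of-2 additive secrecy, then a 2-of-3 XOR parity; the claimed K values and the 2.0x bound are not modeled, and the assumption here is that the efficiency pass operates within each secrecy share). It also illustrates why ordering matters: a parity computed across the two additive shares would equal the plaintext itself, so the erasure pass must stay inside each secrecy share.

```python
import secrets

def pass1_secrecy(data: bytes):
    # Pass 1: 2-of-2 additive XOR sharing; either share alone is
    # uniformly random (information-theoretic secrecy).
    pad = secrets.token_bytes(len(data))
    return pad, bytes(p ^ d for p, d in zip(pad, data))

def pass2_erasure(share: bytes):
    # Pass 2: 2-of-3 XOR erasure code applied WITHIN one share: two
    # halves plus their parity; any two fragments rebuild the share.
    # (Parity across the pass-1 shares would leak: pad ^ (pad ^ data) = data.)
    assert len(share) % 2 == 0
    h1, h2 = share[:len(share) // 2], share[len(share) // 2:]
    parity = bytes(a ^ b for a, b in zip(h1, h2))
    return [h1, h2, parity]

def recover_share(frags):
    # frags: dict mapping fragment index {0: h1, 1: h2, 2: parity},
    # with any one entry allowed to be missing.
    if 0 in frags and 1 in frags:
        return frags[0] + frags[1]
    if 0 in frags:
        return frags[0] + bytes(a ^ b for a, b in zip(frags[0], frags[2]))
    return bytes(a ^ b for a, b in zip(frags[1], frags[2])) + frags[1]
```

Losing any one fragment of each encoded share is survivable, while no fragment set short of a full secrecy pair reveals plaintext.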
Section 4 · Remaining Claims CONTINUATION 3

All Dual-Mode / Double XorIDA Claims (C61-C78)

7 independent claims covering security mode, efficiency mode, mode switching, and Double XorIDA, plus 11 dependent claims.

Independent Claims:

  • C61 · Security mode — GF(2) matrix configured for information-theoretic secrecy (k-of-n, fewer than k shares = zero information)
  • C62 · Efficiency mode — same GF(2) matrix configured for erasure resilience (any k-of-n shares reconstruct, optimized for availability)
  • C68 · Runtime mode switching — same hardware/software switches between security and efficiency modes via matrix parameter change
  • C70 · Double XorIDA system claim — system comprising security encoder, efficiency encoder, and reconstruction pipeline
  • C77 · Double XorIDA apparatus — hardware device implementing two-pass encoding with dedicated XOR gate arrays per pass
  • C78 · Double XorIDA computer-readable medium — non-transitory storage with instructions for two-pass encoding

Dependent Claims:

  • C63 → C61: Vandermonde matrix for optimal security mode distribution
  • C64 → C62: Cauchy matrix for optimal efficiency mode distribution
  • C65 → C61: Security mode with per-share HMAC-SHA256 integrity verification
  • C66 → C61: Security mode with minimum entropy verification on generated shares
  • C67 → C61: Security mode with configurable threshold k from 2 to n-1
  • C71 → C69: 2.0x total storage overhead (proven bound)
  • C72 → C69: Same FPGA hardware for both passes (GF(2) only)
  • C73 → C69: Pipelined execution — pass 2 begins before pass 1 completes
  • C74 → C69: Security preserved through efficiency pass (formal proof)
  • C75 → C69: Pass ordering critical — secrecy before efficiency
  • C76 → C70: System with automatic mode selection based on data classification policy
Section 5 · Claims 79-80 · Heterogeneous Memory CONTINUATION 3

Heterogeneous Memory Architecture System

System with Intel DDR5, Nvidia HBM, and ARM SRAM memory subsystems, each holding one threshold share. Each memory architecture requires a physically distinct attack vector. No single memory technology compromise is sufficient.

CLAIM 80 · System · Intel DDR5 + Nvidia HBM + ARM SRAM — each memory type requires an independent attack
Three distinct memory architectures, three distinct physical attack surfaces, zero overlap.
USE CASE: A defense contractor stores classified data across DDR5, HBM, and SRAM. An attacker with a DDR5 interposer gets one share. HBM and SRAM require entirely different hardware attacks.
C79 → C53: GF(2) real-time reconstruction optimized per memory type — DDR5 burst, HBM streaming, SRAM direct access patterns.
GROUP E · CONTINUATION 4 · 12 claims · Xboot server code deployment
Section 6 · Claims 81-92 · Xboot — Secure Code Deployment CONTINUATION 4

Xboot: Deploy Server Code as Threshold Shares

At deploy time, split server source code into threshold shares via GF(2) IDA. Distribute shares to n production servers, each storing only one share. At boot time, reconstruct code in volatile memory from k shares. Zero plaintext code on any disk, ever. Full root access on any server reveals zero source code.

CLAIM 81 · Method · Xboot: deploy server code as threshold shares — zero plaintext on any server
Boot-time reconstruction in volatile memory. Full root access = zero source code.
USE CASE: Server source code is split into shares at deploy time. The production server stores only one share — an attacker with full root access sees zero code.
C89: Base45 Xformat envelope for share encoding — IDA5 magic header + HMAC-SHA256 per share for integrity verification at boot.
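A minimal sketch of the split-at-deploy / verify-at-boot cycle (n-of-n XOR shares with a per-share HMAC-SHA256 tag in the spirit of C89; key handling, the Base45 envelope, and TEE placement are not modeled):

```python
import hashlib
import hmac
import secrets

def deploy(code: bytes, n: int, key: bytes):
    # Split at deploy time; only shares (never plaintext) would touch disk.
    shares = [secrets.token_bytes(len(code)) for _ in range(n - 1)]
    last = bytearray(code)
    for s in shares:
        for i, byte in enumerate(s):
            last[i] ^= byte
    shares.append(bytes(last))
    # Per-share HMAC-SHA256 tag, checked before boot-time reconstruction.
    return [(s, hmac.new(key, s, hashlib.sha256).digest()) for s in shares]

def boot(envelopes, key: bytes) -> bytes:
    # Verify every share's tag, then XOR-reconstruct in memory only.
    code = None
    for share, tag in envelopes:
        expect = hmac.new(key, share, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("share integrity check failed; refusing to boot")
        code = share if code is None else bytes(a ^ b for a, b in zip(code, share))
    return code
```

A tampered share fails its HMAC check and the boot aborts before any reconstruction happens.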
Section 6 · Remaining Claims CONTINUATION 4

All Xboot Secure Code Deployment Claims (C82-C92)

8 independent claims (C81-C88) for secure code deployment via threshold sharing, plus 4 dependent claims (C89-C92).

Independent Claims:

  • C82 · Boot-time reconstruction protocol — k servers exchange shares over mTLS, reconstruct in TEE-protected volatile memory, execute without touching disk
  • C83 · Signed deployment manifest — deployer signs manifest with Ed25519, includes code hash + share count + threshold + timestamp. Servers verify before reconstruction.
  • C84 · Rolling deployment via share rotation — update code by generating new shares, distribute incrementally, zero-downtime via staged reconstruction
  • C85 · Unified hardware + software threshold protection — code shares in TEE (hardware) + separate data shares via XorIDA (software), both required for operation
  • C86 · Xboot audit trail — every deployment, reconstruction, and code execution event logged with deployer DID + timestamp + code hash
  • C87 · Emergency code revocation — invalidate all shares by rotating the deployment manifest, preventing reconstruction of compromised code
  • C88 · Multi-tenant share isolation — multiple applications on same server, each with independent threshold shares, cross-tenant reconstruction impossible

Dependent Claims:

  • C89 → C81: Base45 Xformat envelope with IDA5 magic header + per-share HMAC-SHA256
  • C90 → C82: Reconstruction timeout — if k shares not received within configurable window, boot fails safely
  • C91 → C83: Manifest includes minimum TEE attestation version required for reconstruction
  • C92 → C85: Hardware threshold for code + software threshold for data + push-authenticated deployment approval (triple protection)
GROUP F · CONTINUATION 5 · 22 claims · Xcompute core computation
Section 7 · Claims 93-126 (34 Claims, 8 Independent) · FIGs 17-20 · Computation on Threshold Shares CONTINUATION 5

Xcompute: Privacy-Preserving Computation on XorIDA Shares

Evaluate Boolean circuits directly on threshold shares without reconstruction. Plaintext NEVER exists at any compute node. Three computation classes: Class 1 (XOR-homomorphic, zero communication), Class 2 (AND gates via Beaver triples, 1 bit/party), Class 3 (hybrid). GF(2) operations deliver 64-256x lower communication cost than Shamir MPC.

FIG 17 · System Architecture — Computation on Threshold Shares
Data Owners A, B, and C each split their data into 3 shares.
Compute Nodes 1–3 each perform T2A conversion, local XOR (Class 1), and Beaver AND (Class 2).
Boolean circuit: XOR gates are free; AND gates cost one communication round each.
Result shares are re-shared via A2T. HMAC-verified · no plaintext at any node.
FIG 18 · Share Conversion Protocol CONTINUATION 5

Threshold-to-Additive (T2A) Share Conversion

Convert XorIDA threshold shares into additive shares suitable for Boolean circuit evaluation. Pure GF(2) operations — Gaussian elimination with random masking. No field conversion needed. Each party's computation is entirely local except for a single masked exchange.

FIG 18 · Threshold-to-Additive (T2A) Conversion Protocol
Step 1 (local): each party computes R_i = G_inv · S_i — a matrix multiply using pure GF(2) XOR operations only.
Step 2 (local): generate a random mask M_i and keep A_i = M_i as the additive share.
Step 3 (exchange): send R_i XOR M_i to the combiner — masked, so it is random without M_i. The combiner sees only masked values — zero information about the data.
Step 4 (combine): the combiner XORs all masked values into A_2 → result: M = A_1 XOR A_2 XOR A_3.
Key advantage: pure XOR over GF(2). No finite field conversion. No modular arithmetic.
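The masked-exchange core of the protocol can be sketched as follows (the matrix step R_i = G_inv · S_i is taken as given, and party and function names are illustrative). Each party keeps its fresh mask as its additive share and publishes only a masked value, so the combiner learns nothing:

```python
import secrets
from functools import reduce

def t2a_party(local_contrib: bytes):
    # local_contrib stands in for R_i. The party keeps a fresh random
    # mask M_i as its additive share and publishes only R_i XOR M_i,
    # which is uniformly random to anyone who lacks M_i.
    mask = secrets.token_bytes(len(local_contrib))
    masked = bytes(r ^ m for r, m in zip(local_contrib, mask))
    return mask, masked  # (kept private, sent to combiner)

def combine(masked_values):
    # The combiner XORs the masked values into a single additive share.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), masked_values)
```

XOR-ing the combiner's output with all the kept masks recovers the XOR of all local contributions, while no single party (combiner included) sees that value on its own.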
FIG 19 · AND Gate Protocol + Communication Cost CONTINUATION 5

Beaver Triple AND Gate Protocol Over GF(2)

Compute AND gates on secret-shared bits using pre-generated Beaver triples. Each triple share is 1 BIT in GF(2) (vs 64-256 bits in Shamir over GF(p)). One communication round per AND gate. XOR gates are completely free — no communication needed.

FIG 19 · Beaver Triple AND Gate Protocol Over GF(2)
Offline phase: generate a triple (a, b, c) where c = a AND b. Each share is 1 BIT in GF(2) — vs Shamir's 64-256 bits per share element.
Online phase: broadcast d = x XOR a and e = y XOR b (cost: 2 bits per party total; one round of communication per AND gate), then compute z_i = c_i XOR (d AND b_i) XOR (a_i AND e) XOR (i==1 ? d AND e : 0).
All operations in GF(2): XOR = addition, AND = multiplication — single-bit arithmetic.

FIG 20 · Communication Cost per AND Gate
  • GF(2) XorIDA: 2 bits
  • Shamir GF(p64): 128 bits
  • Shamir GF(p256): 512 bits
  • FHE (BFV): ~30ms
64-256x lower cost vs Shamir MPC.
GROUP G · CONTINUATION 6 · 12 claims · Xcompute embodiments
Computation Embodiments · Spec ¶[0064]-[0067] CONTINUATION 6

Four Computation Embodiments

Privacy-preserving computation across four domains. End-to-end pipeline: Split → Distribute → T2A Convert → Boolean Circuit → Re-Share → Reconstruct. HMAC chain of custody ensures integrity from split through computation to result reconstruction.

Computation Classes:

  • Class 1 · XOR-homomorphic — zero communication. Each party computes locally: S_i(A) XOR S_i(B) = S_i(A XOR B). Completely FREE.
  • Class 2 · AND gates via Beaver triples — 1 communication round per AND gate. 1 bit per party per gate (vs 64-256 bits in Shamir GF(p)).
  • Class 3 · Hybrid circuits — Boolean circuits combining XOR (free) and AND (1 round each). Optimal for real-world computations.
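Class 1 can be demonstrated in a few lines (3-of-3 additive XOR shares as a toy stand-in for XorIDA shares): each node XORs its own shares of A and B with no communication, and the resulting shares reconstruct to A XOR B.

```python
import secrets

def split3(data: bytes):
    # 3-of-3 additive XOR sharing.
    s1 = secrets.token_bytes(len(data))
    s2 = secrets.token_bytes(len(data))
    s3 = bytes(d ^ a ^ b for d, a, b in zip(data, s1, s2))
    return [s1, s2, s3]

def reconstruct3(shares):
    return bytes(a ^ b ^ c for a, b, c in zip(*shares))

def local_xor(share_a: bytes, share_b: bytes) -> bytes:
    # Class 1: each node XORs its own shares of A and B locally.
    # Zero communication; the outputs are valid shares of A XOR B.
    return bytes(x ^ y for x, y in zip(share_a, share_b))
```

Because XOR sharing is linear over GF(2), the homomorphism is exact: no rounding, no noise growth, no interaction.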

Four Embodiments:

  • Emb. 1 · Fraud detection (spec ¶[0066]) — equality testing circuit. Match/no-match on split records across institutions. Records never leave their threshold-shared state.
  • Emb. 2 · Aggregate analytics (spec ¶[0067]) — sum, count, average on shared data without exposing individual records. Privacy-preserving ad attribution and statistics.
  • Emb. 3 · Credit scoring — comparison circuit produces approve/deny decision without revealing the actual credit score to any party.
  • Emb. 4 · Federated learning — gradient aggregation across institutions. Model improves without any institution revealing its training data.

HMAC Chain of Custody (spec ¶[0064]-[0065]):

Data owner splits → HMAC tag per share → compute nodes verify → T2A conversion → Boolean circuit evaluation → additive-to-threshold re-share → HMAC tag on result shares → reconstructor verifies. Input records NEVER exist in plaintext at any point in the pipeline.
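The per-stage tagging can be sketched as follows (stage labels and key handling are illustrative). Binding the stage name into the MAC means a tag issued at one pipeline stage cannot be replayed at another:

```python
import hashlib
import hmac

def tag(key: bytes, stage: str, payload: bytes) -> bytes:
    # MAC over (stage label, payload); the label separates the "split",
    # "result", and other hops in the chain of custody.
    return hmac.new(key, stage.encode() + b"\x00" + payload, hashlib.sha256).digest()

def verify(key: bytes, stage: str, payload: bytes, t: bytes) -> None:
    if not hmac.compare_digest(t, tag(key, stage, payload)):
        raise ValueError(f"chain-of-custody failure at stage {stage!r}")
```

In the pipeline above, the data owner tags each share at "split", a compute node verifies before T2A, and after A2T re-sharing it tags result shares at "result" for the reconstructor to verify.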

HMAC Chain of Custody Split → HMAC → Verify → T2A → Circuit → A2T → HMAC → Verify → Reconstruct Integrity verified at every stage. Plaintext exists only at the original data owner and final reconstructor. Compute nodes process shares without ever seeing the underlying data.