Distributed Compute

Coming Q1 2026 — Distributed Compute is currently in development. This documentation is a design preview. Request early access to be notified when it’s available.

Overview

Distributed Compute enables AI training and inference across hybrid Earth-space infrastructure. Coordinate federated learning, partition models optimally, and synchronize through bandwidth-constrained orbital links.

Why Earth-Space Distributed Compute?

Large AI models don’t fit on any single node, so training and inference must be distributed across many machines. Space adds constraints that terrestrial clusters don’t face:
Challenge                                      Solution
Bandwidth is scarce (limited ground passes)    100x gradient compression with TopK + quantization
Latency varies wildly (5ms to 500ms+)          Async aggregation and intelligent model partitioning
Connectivity is intermittent                   Priority-based sync scheduling across passes
Topology is dynamic                            ISL mesh routing adapts to orbital geometry

Architecture

Your training job connects to RotaStellar Distributed Compute, which coordinates workloads across ground and orbital infrastructure:
1. Distributed Compute Layer

The core coordination layer includes Federated Learning (gradient compression and aggregation), Model Partitioning (optimal layer placement), and Sync Scheduler (ground pass planning; a toy planner is sketched after step 3).
2. Space Mesh

Inter-Satellite Link (ISL) routing enables orbital nodes to communicate with each other and relay data to ground stations.
3. Infrastructure

Ground Nodes provide high-bandwidth terrestrial compute. LEO Nodes run solar-powered orbital compute, connected via ISL and synchronized during ground passes.
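The Sync Scheduler in step 1 decides which payloads go out during which ground pass. The sketch below shows one minimal way to picture that, assuming a greedy priority-then-budget policy; the Payload and GroundPass classes and the plan function are illustrative names, not part of the rotastellar_distributed API.

# Illustrative ground-pass planner (not the rotastellar_distributed API):
# pack the highest-priority payloads into each contact window's link budget.
from dataclasses import dataclass

@dataclass
class Payload:
    name: str
    size_mb: float
    priority: int               # higher value = more urgent

@dataclass
class GroundPass:
    station: str
    budget_mb: float            # data deliverable during this contact window

def plan(payloads, passes):
    """Greedy schedule: fill each pass with the most urgent payloads that fit."""
    queue = sorted(payloads, key=lambda p: p.priority, reverse=True)
    schedule = []
    for gp in passes:
        remaining, assigned = gp.budget_mb, []
        for p in list(queue):
            if p.size_mb <= remaining:
                assigned.append(p.name)
                remaining -= p.size_mb
                queue.remove(p)
        schedule.append((gp.station, assigned))
    return schedule             # payloads left in `queue` wait for a later pass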

Key Capabilities

Gradient Compression

Reduce bandwidth by 100x with minimal accuracy loss:
from rotastellar_distributed import CompressionConfig

compression = CompressionConfig(
    method="topk_quantized",
    k_ratio=0.01,           # Keep top 1% of gradients
    quantization_bits=8,    # 8-bit quantization
    error_feedback=True     # Accumulate compression error
)
# 4.2 MB gradient → 42 KB compressed
# Under 0.5% accuracy loss
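For intuition, the sketch below shows the mechanics behind those settings in plain NumPy: keep the top-k gradient entries by magnitude, quantize them to 8 bits, and carry the dropped remainder forward as error feedback. It is an illustration only, not the library's implementation; the function names and signatures are assumptions.

# Illustrative TopK + 8-bit quantization with error feedback (NumPy only).
import numpy as np

def compress(grad, residual, k_ratio=0.01, bits=8):
    """Return (indices, int8 values, scale) and the updated error residual."""
    corrected = grad + residual                          # re-add previously dropped error
    flat = corrected.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]         # top-k entries by magnitude
    scale = max(np.abs(flat[idx]).max(), 1e-12) / (2 ** (bits - 1) - 1)
    q = np.round(flat[idx] / scale).astype(np.int8)      # 8-bit quantization
    residual = corrected.copy()                          # everything not sent becomes error
    residual.ravel()[idx] -= q.astype(np.float32) * scale
    return (idx, q, scale), residual

def decompress(idx, q, scale, shape):
    """Rebuild a sparse-filled dense gradient from the compressed payload."""
    dense = np.zeros(int(np.prod(shape)), dtype=np.float32)
    dense[idx] = q.astype(np.float32) * scale
    return dense.reshape(shape)

With error feedback enabled, entries dropped in one round are not lost; they accumulate in the residual and are eventually transmitted, which is what keeps the accuracy loss small.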

Async Aggregation

Handle intermittent connectivity with async federated averaging (a minimal aggregator sketch follows this list):
  • Nodes train independently during eclipse/no-contact periods
  • Gradients sync during ground station passes
  • Central aggregator handles out-of-order updates
  • Convergence guaranteed despite variable latency
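A common way to make out-of-order updates safe is staleness weighting: every gradient is applied, but the older the model version it was computed against, the less it counts. The sketch below is a hypothetical aggregator for illustration only, not the rotastellar_distributed API.

# Hypothetical staleness-weighted aggregator (illustration only).
import numpy as np

class AsyncAggregator:
    def __init__(self, params, lr=0.1):
        self.params = params          # flattened global model parameters
        self.lr = lr
        self.version = 0              # increments on every applied update

    def apply(self, gradient, trained_at_version):
        """Apply an update that may arrive out of order after a ground pass."""
        staleness = self.version - trained_at_version
        weight = 1.0 / (1.0 + max(staleness, 0))   # stale updates count for less
        self.params -= self.lr * weight * gradient
        self.version += 1

Down-weighting stale updates is one standard remedy for asynchronous drift; it lets eclipse-period training still contribute without destabilizing convergence.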

Intelligent Partitioning

Split models optimally across Earth and orbital nodes (a toy cut-point search follows this list):
  • Minimize data transfer at cut points
  • Account for per-node compute capacity
  • Adapt to changing orbital geometry
  • Balance latency vs throughput
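To see what "minimize data transfer at cut points" means concretely, the sketch below searches for a single cut between a ground node and an orbital node: it keeps only placements that fit each node's compute budget and picks the cut with the smallest activation tensor crossing the link. It is a deliberately simplified cost model with made-up numbers, not the production partitioner.

# Toy cut-point search: layers [0, cut) run on the ground node, layers
# [cut, N) run on the orbital node; minimize bytes crossing the link.
def best_cut(activation_bytes, layer_flops, ground_capacity, orbital_capacity):
    """activation_bytes[i] is the size of the tensor passed from layer i to i+1."""
    best = None
    for cut in range(1, len(layer_flops)):
        ground_load = sum(layer_flops[:cut])
        orbital_load = sum(layer_flops[cut:])
        if ground_load > ground_capacity or orbital_load > orbital_capacity:
            continue                                 # placement does not fit
        transfer = activation_bytes[cut - 1]         # bytes over the Earth-space link
        if best is None or transfer < best[1]:
            best = (cut, transfer)
    return best                                      # (cut index, bytes per step) or None

# Hypothetical numbers: a 4-layer model with shrinking activations.
print(best_cut(
    activation_bytes=[4_194_304, 1_048_576, 262_144],
    layer_flops=[8e9, 6e9, 4e9, 2e9],
    ground_capacity=2e10,
    orbital_capacity=8e9,
))   # -> (3, 262144): cut late, send the smallest activation over the link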

Quick Start

from rotastellar_distributed import FederatedClient, CompressionConfig

# Configure compression
compression = CompressionConfig(
    method="topk_quantized",
    k_ratio=0.01,
    quantization_bits=8
)

# Initialize federated client
client = FederatedClient(
    api_key="rs_...",
    node_id="orbital-3",
    node_type="orbital",
    compression=compression
)

# Train locally
gradients = client.train_step(model, batch)

# Sync during ground pass
client.sync(gradients, priority="high")

Performance

Metric                  Value
Gradient compression    100x (4.2 MB → 42 KB)
Accuracy loss           Under 0.5% vs uncompressed
Sync efficiency         +45% bandwidth utilization
Training overhead       +15-20% time vs centralized
Energy savings          35-45% vs terrestrial-only

Timeline

Milestone                    Target
Design preview (this doc)    Now
SDK with simulators          Q1 2026
Beta with partners           Q2 2026
General availability         Q3 2026

Get Notified

Request Early Access

Be the first to know when Distributed Compute is available.