Distributed Compute
Overview
Distributed Compute enables AI training and inference across hybrid Earth-space infrastructure. Coordinate federated learning, partition models optimally, and synchronize through bandwidth-constrained orbital links.
Federated Learning
Train models across Earth and orbital nodes with gradient compression
Model Partitioning
Optimal layer placement across Earth and space infrastructure
Sync Scheduler
Ground station pass planning and priority-based queuing
Space Mesh
ISL routing for orbital node communication
Why Earth-Space Distributed Compute?
Large AI models don’t fit on any single node. Training and inference must span infrastructure. But space introduces unique constraints:
| Challenge | Solution |
|---|---|
| Bandwidth is scarce (limited ground passes) | 100x gradient compression with TopK + quantization |
| Latency varies wildly (5ms to 500ms+) | Async aggregation and intelligent model partitioning |
| Connectivity is intermittent | Priority-based sync scheduling across passes |
| Topology is dynamic | ISL mesh routing adapts to orbital geometry |
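As a concrete illustration of priority-based sync scheduling across passes, here is a toy greedy scheduler that packs queued transfers into upcoming ground-station passes in priority order. The transfer and pass fields (priority, size, link budget) are assumptions made for the sketch; the real Sync Scheduler also models pass duration and orbital geometry.

```python
import heapq

def schedule_syncs(transfers, passes):
    """Greedy sketch: fill each ground-station pass with the highest-priority
    transfers that still fit in its remaining link budget.

    transfers: list of (priority, size_mb, name); lower number = higher priority.
    passes: list of (pass_id, capacity_mb) in chronological order.
    Returns {pass_id: [scheduled names]}. Illustrative only.
    """
    heap = list(transfers)
    heapq.heapify(heap)                      # smallest priority number pops first
    plan = {}
    for pass_id, capacity in passes:
        plan[pass_id] = []
        deferred = []
        while heap:
            prio, size, name = heapq.heappop(heap)
            if size <= capacity:
                plan[pass_id].append(name)   # schedule it in this pass
                capacity -= size
            else:
                deferred.append((prio, size, name))  # try again on a later pass
        for item in deferred:
            heapq.heappush(heap, item)
    return plan

plan = schedule_syncs(
    transfers=[(0, 40, "model-deltas"), (1, 100, "telemetry"), (2, 300, "raw-logs")],
    passes=[("pass-1", 150), ("pass-2", 400)],
)
print(plan)   # {'pass-1': ['model-deltas', 'telemetry'], 'pass-2': ['raw-logs']}
```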
Architecture
Your training job connects to RotaStellar Distributed Compute, which coordinates workloads across ground and orbital infrastructure:
1. Distributed Compute Layer
The core coordination layer includes Federated Learning (gradient compression and aggregation), Model Partitioning (optimal layer placement), and Sync Scheduler (ground pass planning).
2. Space Mesh
Inter-Satellite Link (ISL) routing enables orbital nodes to communicate with each other and relay data to ground stations.
3. Infrastructure
Ground Nodes provide high-bandwidth terrestrial compute. LEO Nodes run solar-powered orbital compute, connected via ISL and synchronized during ground passes.
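To make the Space Mesh concrete, here is a tiny sketch of shortest-path routing over one snapshot of an ISL graph using latency weights. The topology, node names, and helper function are invented for illustration; the real mesh re-plans routes as orbital geometry changes.

```python
import heapq

def isl_route(links, src, dst):
    """Dijkstra over a snapshot of the ISL graph.

    links: {node: [(neighbor, latency_ms), ...]}. Returns (total_ms, path).
    A real mesh recomputes this as satellites move and links appear or drop.
    """
    frontier = [(0.0, src, [src])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, ms in links.get(node, []):
            if nbr not in seen:
                heapq.heappush(frontier, (cost + ms, nbr, path + [nbr]))
    return float("inf"), []

# Toy constellation: two LEO nodes can relay to a ground station.
links = {
    "leo-a": [("leo-b", 12.0), ("ground-1", 38.0)],
    "leo-b": [("leo-a", 12.0), ("ground-1", 20.0)],
    "ground-1": [],
}
print(isl_route(links, "leo-a", "ground-1"))   # (32.0, ['leo-a', 'leo-b', 'ground-1'])
```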
Key Capabilities
Gradient Compression
Reduce bandwidth by 100x with minimal accuracy loss.
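Below is a minimal NumPy sketch of the TopK + quantization approach mentioned above: keep the largest-magnitude gradient entries, quantize them to int8, and reconstruct a sparse gradient on the receiving side. The function names, the 1% keep ratio, and the symmetric int8 scheme are illustrative assumptions, not the shipped pipeline.

```python
import numpy as np

def compress_gradient(grad: np.ndarray, k_ratio: float = 0.01):
    """Keep only the top-k largest-magnitude entries, then quantize them to int8.

    Illustrative sketch; a production pipeline may add error feedback and
    compact index encoding.
    """
    flat = grad.ravel().astype(np.float32)
    k = max(1, int(flat.size * k_ratio))            # keep ~1% of the values
    idx = np.argpartition(np.abs(flat), -k)[-k:]    # indices of the k largest magnitudes
    vals = flat[idx]
    scale = max(float(np.abs(vals).max()) / 127.0, 1e-12)  # symmetric int8 scale
    quantized = np.round(vals / scale).astype(np.int8)
    return idx.astype(np.uint32), quantized, scale, grad.shape

def decompress_gradient(idx, quantized, scale, shape):
    """Rebuild a dense gradient: zeros everywhere except the kept entries."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = quantized.astype(np.float32) * scale
    return flat.reshape(shape)

grad = np.random.randn(1024, 1024).astype(np.float32)   # ~4.2 MB of float32 gradients
packed = compress_gradient(grad)
approx = decompress_gradient(*packed)
```

With a 1% keep ratio and 5 bytes per kept entry (4-byte index plus 1-byte value), the payload lands in the same regime as the 100x figure in the Performance table, especially once indices are delta-encoded.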
Async Aggregation
Handle intermittent connectivity with async federated averaging (see the sketch after this list):
- Nodes train independently during eclipse/no-contact periods
- Gradients sync during ground station passes
- Central aggregator handles out-of-order updates
- Convergence guaranteed despite variable latency
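The sketch below shows one way a central aggregator can absorb out-of-order updates: each update carries the global step it was computed against and is down-weighted by its staleness. The 1 / (1 + staleness) rule and class names are illustrative assumptions, not the service's actual algorithm.

```python
import numpy as np

class AsyncAggregator:
    """Toy central aggregator that accepts out-of-order gradient updates.

    Stale updates are down-weighted rather than dropped, so nodes that only
    sync during ground passes still contribute to the global model.
    """

    def __init__(self, model: np.ndarray, lr: float = 0.1):
        self.model = model.astype(np.float32)
        self.step = 0
        self.lr = lr

    def submit(self, grad: np.ndarray, computed_at_step: int) -> None:
        staleness = max(0, self.step - computed_at_step)
        weight = 1.0 / (1.0 + staleness)        # older gradients count for less
        self.model -= self.lr * weight * grad.astype(np.float32)
        self.step += 1

# A node that trained offline during eclipse syncs late on the next pass;
# its update still lands, just with a reduced weight.
agg = AsyncAggregator(model=np.zeros(4))
agg.submit(np.ones(4), computed_at_step=0)   # fresh update, weight 1.0
agg.submit(np.ones(4), computed_at_step=0)   # arrives after step advanced, weight 0.5
```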
Intelligent Partitioning
Split models optimally across Earth and orbital nodes (see the sketch after this list):
- Minimize data transfer at cut points
- Account for per-node compute capacity
- Adapt to changing orbital geometry
- Balance latency vs throughput
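As a small illustration of the first two points, the sketch below picks a single cut point for a layer-by-layer split between a ground node and a LEO node: it minimizes the activation bytes crossing the Earth-space link while respecting each node's compute budget. The layer sizes and budgets are made-up numbers, and a real partitioner would consider multi-way splits and changing geometry.

```python
def best_cut(layers, ground_budget, leo_budget):
    """Pick the layer boundary that minimizes bytes crossing the Earth-space link.

    layers: list of dicts with per-layer `flops` and `act_bytes` (size of the
    activation emitted after that layer). Ground runs layers [0, cut), the LEO
    node runs [cut, n). Returns (cut, transfer_bytes) or None if nothing fits.
    Illustrative only.
    """
    n = len(layers)
    best = None
    for cut in range(1, n):
        ground_flops = sum(l["flops"] for l in layers[:cut])
        leo_flops = sum(l["flops"] for l in layers[cut:])
        if ground_flops > ground_budget or leo_flops > leo_budget:
            continue                                  # this split overloads a node
        transfer = layers[cut - 1]["act_bytes"]       # activations shipped over the link
        if best is None or transfer < best[1]:
            best = (cut, transfer)
    return best

layers = [
    {"flops": 4e9, "act_bytes": 8_000_000},
    {"flops": 6e9, "act_bytes": 2_000_000},   # bottleneck activation: a good cut point
    {"flops": 5e9, "act_bytes": 6_000_000},
    {"flops": 3e9, "act_bytes": 1_000_000},
]
print(best_cut(layers, ground_budget=12e9, leo_budget=10e9))   # (2, 2000000)
```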
Quick Start
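The SDK is not yet released (see Timeline below), so the snippet here is a purely hypothetical sketch of what a federated training job could look like. Every name in it, including the package, client class, method, and parameters, is an assumption for illustration, not a published API.

```python
# Hypothetical usage sketch only: the SDK ships later (see Timeline), and none
# of these names are confirmed public APIs.
from rotastellar import DistributedCompute   # assumed package and class name

dc = DistributedCompute(project="demo")      # assumed client constructor

job = dc.federated_train(                    # assumed method; parameters illustrative
    model="resnet50",
    nodes=["ground-us-west", "leo-cluster-1"],
    compression="topk-int8",                 # the TopK + quantization scheme above
    sync="ground-pass",                      # sync gradients on ground-station passes
)
print(job.status())
```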
Performance
| Metric | Value |
|---|---|
| Gradient compression | 100x (4.2 MB → 42 KB) |
| Accuracy loss | Under 0.5% vs uncompressed |
| Sync efficiency | +45% bandwidth utilization |
| Training overhead | +15-20% time vs centralized |
| Energy savings | 35-45% vs terrestrial-only |
Timeline
| Milestone | Target |
|---|---|
| Design preview (this doc) | Now |
| SDK with simulators | Q1 2026 |
| Beta with partners | Q2 2026 |
| General availability | Q3 2026 |
Get Notified
Request Early Access
Be the first to know when Distributed Compute is available.

