Train models, run fine-tuning jobs, and execute GPU workloads on a distributed network of partner hardware — without managing infrastructure.
01
Specify a Docker image, a command, and the minimum VRAM you need. Submit via the dashboard or CLI.
02
The scheduler finds an approved partner node with available capacity and dispatches your job automatically.
03
Your container runs in a hardened, isolated environment. Stdout and stderr stream back to you in real time.
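Concretely, the three steps above might look like the following job spec. This is an illustrative sketch only: the file layout, field names, and the `gridctl` CLI name are assumptions, not the actual interface.

```yaml
# job.yaml — hypothetical job specification (all field names illustrative)
image: pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime  # Docker image to run
command: ["python", "train.py", "--epochs", "10"]     # command executed inside the container
resources:
  min_vram_gb: 24   # minimum VRAM the scheduler must find on a partner node

# Submit via a hypothetical CLI; the scheduler dispatches it to a matching node
# and streams stdout/stderr back in real time:
#   gridctl submit job.yaml --follow
```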
Register your machine as a partner node. Earn a share of every job that runs on your hardware. Our setup wizard handles Docker hardening, VPN tunneling, and job dispatch automatically.