gpuScheduler
Now in early access

GPU compute,
on demand

Train models, run fine-tuning jobs, and execute GPU workloads on a distributed network of partner hardware — without managing infrastructure.

How it works

01

Submit a job

Specify a Docker image, a command, and the minimum VRAM you need. Submit via the dashboard or CLI.
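The three inputs above can be pictured as a small job spec. This is an illustrative sketch only — the field names and validation rules are assumptions, not the actual gpuScheduler API.

```python
from dataclasses import dataclass

@dataclass
class JobSpec:
    # Hypothetical field names for the three inputs described above:
    # a Docker image, a command, and a minimum-VRAM requirement.
    image: str          # Docker image to run
    command: list[str]  # command executed inside the container
    min_vram_gb: int    # smallest GPU memory the job will accept

    def validate(self) -> None:
        if not self.image:
            raise ValueError("a Docker image is required")
        if min(self.min_vram_gb, 1) < 1:
            raise ValueError("min_vram_gb must be at least 1")

# Example: a fine-tuning job that needs at least 24 GB of VRAM.
job = JobSpec(
    image="pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime",
    command=["python", "train.py", "--epochs", "3"],
    min_vram_gb=24,
)
job.validate()
```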

02

Matched to a node

The scheduler finds an approved partner node with available capacity and dispatches your job automatically.
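The matching step can be sketched as a filter over the partner fleet. The best-fit policy and the `Node` fields here are assumptions for illustration; the production scheduler's actual policy is not described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    vram_gb: int
    approved: bool
    busy: bool

def match_node(min_vram_gb: int, nodes: list[Node]) -> Optional[Node]:
    """Pick an approved, idle node that satisfies the VRAM floor.

    Best-fit (smallest sufficient GPU) keeps the large cards free
    for large jobs -- one plausible policy, chosen for the sketch.
    """
    candidates = [n for n in nodes
                  if n.approved and not n.busy and n.vram_gb >= min_vram_gb]
    return min(candidates, key=lambda n: n.vram_gb, default=None)

fleet = [
    Node("a100-box", 80, approved=True, busy=False),
    Node("rtx4090-rig", 24, approved=True, busy=False),
    Node("unvetted", 48, approved=False, busy=False),
]
chosen = match_node(24, fleet)  # best fit: rtx4090-rig
```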

03

Stream results

Your container runs in a hardened, isolated environment. Stdout and stderr stream back to you in real time.
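Real-time log relay can be approximated with line-buffered streaming. In this sketch a local subprocess stands in for the remote container; `stream_job` is a hypothetical name, not part of the product.

```python
import subprocess
import sys

def stream_job(cmd: list[str]) -> list[str]:
    """Run a command and relay its output line by line as it arrives.

    Stdout and stderr are merged into one stream, mirroring the
    single combined log feed described above.
    """
    lines = []
    with subprocess.Popen(cmd, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT, text=True) as proc:
        for line in proc.stdout:   # yields each line as it is written
            print(line, end="")    # relay to the caller in real time
            lines.append(line.rstrip("\n"))
    return lines

# Stand-in workload: a tiny script that prints two lines.
output = stream_job([sys.executable, "-c",
                     "print('step 1'); print('step 2')"])
```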

Have idle GPUs?

Register your machine as a partner node. Earn a share of every job that runs on your hardware. Our setup wizard handles Docker hardening, VPN tunneling, and job dispatch automatically.

Become a partner