Chisel lets you profile your inference workloads on cloud GPUs without any infrastructure setup.
## Install
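The install command isn't shown in this section; assuming the package is published on PyPI under the name `chisel`, installation would look like:

```shell
# Assumes the PyPI package name is `chisel` -- adjust if it differs.
pip install chisel
```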
## Create your script
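Your script is an ordinary Python program that chisel executes on the GPU you select. The sketch below is a minimal stand-in workload, a timed matrix multiply using only the standard library, so it runs anywhere; a real inference script would load a model and run predictions instead.

```python
# my_script.py -- a minimal stand-in for an inference workload.
# A real script would load a model and run inference; this just times
# a CPU-bound matrix multiply so the example is self-contained.
import time


def run_inference(size: int = 64) -> float:
    """Multiply two size x size matrices and return the elapsed seconds."""
    a = [[float(i + j) for j in range(size)] for i in range(size)]
    b = [[float(i * j % 7) for j in range(size)] for i in range(size)]
    start = time.perf_counter()
    _ = [[sum(a[i][k] * b[k][j] for k in range(size)) for j in range(size)]
         for i in range(size)]
    elapsed = time.perf_counter() - start
    print(f"matmul {size}x{size} took {elapsed:.4f}s")
    return elapsed


if __name__ == "__main__":
    run_inference()
```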
## Run on GPU
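The exact invocation isn't shown in this section, so the command below is hypothetical; check `chisel --help` for the real syntax.

```shell
# Hypothetical invocation (the subcommand name is an assumption):
chisel run my_script.py
```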
| Type | GPUs | Memory | Best For |
|---|---|---|---|
| `GPUType.A100_80GB_1` | 1x A100 | 80GB | Development, inference |
| `GPUType.A100_80GB_2` | 2x A100 | 160GB | Medium training |
| `GPUType.A100_80GB_4` | 4x A100 | 320GB | Large models |
| `GPUType.A100_80GB_8` | 8x A100 | 640GB | Massive models |
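To make the choice concrete, here is a small sketch that picks the smallest configuration from the table that fits a given memory footprint. The `GPUType` enum below is a self-contained mirror of the table, defined locally for illustration (the real enum ships with chisel), and the `smallest_fit` helper is hypothetical.

```python
# Local mirror of the GPUType options above, for illustration only;
# the real enum is provided by chisel.
from enum import Enum


class GPUType(Enum):
    # (gpu_count, total_memory_gb) per the table above
    A100_80GB_1 = (1, 80)
    A100_80GB_2 = (2, 160)
    A100_80GB_4 = (4, 320)
    A100_80GB_8 = (8, 640)

    @property
    def memory_gb(self) -> int:
        return self.value[1]


def smallest_fit(required_gb: int) -> GPUType:
    """Hypothetical helper: pick the smallest config whose memory suffices."""
    for gpu in GPUType:
        if gpu.memory_gb >= required_gb:
            return gpu
    raise ValueError(f"No configuration has {required_gb}GB of memory")


if __name__ == "__main__":
    # A 120GB model needs more than one 80GB card.
    print(smallest_fit(120).name)
```

Single-GPU configurations are usually enough for development and inference profiling; reach for the multi-GPU types only when the model's weights and activations exceed 80GB.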