ComfyUI in the Cloud: No GPU Required
Why ComfyUI in the Cloud?
If you read the A1111 cloud guide, you already know the basics — rent a GPU online, run the UI in your browser. This guide covers the same thing for ComfyUI.
Why pick ComfyUI over A1111 in the cloud? ComfyUI uses less VRAM, which means you can rent cheaper GPUs or generate larger images on the same hardware. On cloud pricing, that adds up.
Option 1: RunPod (Recommended)
RunPod has pre-built ComfyUI templates. It's the easiest cloud option.
- Create a RunPod account
- Go to Templates and search for "ComfyUI"
- Pick a GPU — e.g. an RTX 3090 (around $0.40/hr) or RTX 4090 (around $0.75/hr); prices fluctuate
- Deploy the pod
- Click Connect → open the ComfyUI web UI
You're in. The default workflow is loaded. Pick a checkpoint, type a prompt, generate.
Tip: RunPod templates usually come with a checkpoint pre-installed. If not, use the terminal to download one:
cd /workspace/ComfyUI/models/checkpoints
wget https://civitai.com/api/download/models/XXXXX -O model.safetensors
Replace the URL with the actual download link from Civitai (right-click the download button → copy link).
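One gotcha worth checking for: if a Civitai download needs you to be logged in (some models are gated behind an API token), the server may hand back an error page instead of the model, and you end up with a tiny HTML file named model.safetensors. A quick hedged sanity check — a real safetensors file never starts with "<", so this catches the most common failure (the helper name and demo file are made up for illustration):

```shell
# Sketch: detect an HTML error page masquerading as a downloaded model.
# check_model is a hypothetical helper, not part of any tool.
check_model() {
  if head -c 1 "$1" | grep -q '<'; then
    echo "looks like HTML, not a model"
  else
    echo "OK"
  fi
}

# Demo with a fake file standing in for a failed download:
printf '<!DOCTYPE html>' > bad.safetensors
check_model bad.safetensors   # prints "looks like HTML, not a model"
```

If you hit this, grab the download link again while logged in, or append your Civitai API token to the URL as the site's docs describe.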
Option 2: Google Colab
Search GitHub for "ComfyUI Colab notebook" — several maintained notebooks exist. The process:
- Open the notebook in Colab
- Set runtime to GPU
- Run all cells
- Click the Gradio/tunnel link when it appears
Same caveats as the A1111 cloud guide: free Colab has time limits and Google sometimes restricts SD usage.
Option 3: Paperspace
Paperspace works well for ComfyUI because of persistent storage — your workflows and models stay between sessions.
- Create a Notebook with a GPU
- Clone ComfyUI in the terminal:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt
python main.py --listen 0.0.0.0
- Access via the provided URL
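To actually benefit from Paperspace's persistence, keep your checkpoints on the persistent volume (typically mounted at /storage) and symlink that folder into the ComfyUI tree, so models survive session restarts. A minimal sketch of the idea — the demo below uses a temp directory standing in for the real mount points, and the paths are assumptions to adjust for your notebook:

```shell
# Sketch: store checkpoints on persistent storage, symlink them into ComfyUI.
# BASE stands in for "/" here so the demo is safe to run anywhere;
# on Paperspace you'd use /storage and your real ComfyUI path directly.
BASE="$(mktemp -d)"
PERSIST="$BASE/storage/comfyui/checkpoints"     # stands in for /storage/...
MODELS="$BASE/ComfyUI/models/checkpoints"       # stands in for the repo path

mkdir -p "$PERSIST" "$MODELS"
touch "$MODELS/model.safetensors"               # pretend a checkpoint exists

# Move any existing checkpoints into persistent storage, then symlink back
mv "$MODELS"/* "$PERSIST"/
rmdir "$MODELS"
ln -s "$PERSIST" "$MODELS"

ls -l "$MODELS/"                                # checkpoint now lives on $PERSIST
```

After this, anything you wget into the checkpoints folder lands on the persistent volume automatically.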
Option 4: ThinkDiffusion
ThinkDiffusion supports ComfyUI as well as A1111. Select ComfyUI when creating your session — it's pre-installed with everything configured.
Which One?
| Situation | Best Option |
|---|---|
| Fastest setup, pay per hour | RunPod |
| Free, limited | Google Colab |
| Persistent storage | Paperspace |
| Zero setup | ThinkDiffusion |
Once You're In
The ComfyUI interface is the same regardless of where it's running. The default workflow has everything wired:
- Load Checkpoint — select your model
- CLIP Text Encode (positive) — type your prompt:
a girl standing in a field of flowers, sunset, beautiful lighting
- Empty Latent Image — set size (512x768 for SD 1.5, 832x1216 for SDXL)
- Click Queue Prompt
Two settings to play with early: steps (20-30 in the KSampler node) and image size (in the Empty Latent Image node). Leave everything else at defaults.
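The same Queue Prompt action is also exposed over HTTP, which is handy on a cloud pod: ComfyUI accepts a workflow graph as JSON via POST /prompt. Here is a hedged sketch of the default workflow in API form — the node IDs ("1" through "7"), the checkpoint filename, and the seed/steps values are placeholders to match to your own setup, and the port assumes ComfyUI's default 8188:

```shell
# Build a minimal SD 1.5 workflow as API-format JSON (a sketch, not the
# exact graph your pod ships with). Node IDs and ckpt_name are placeholders.
cat > payload.json <<'EOF'
{
  "prompt": {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a girl standing in a field of flowers, sunset, beautiful lighting",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 768, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "cloud_test"}}
  }
}
EOF

# Validate the JSON locally; uncomment the curl line to submit to a running pod.
python3 -m json.tool payload.json > /dev/null && echo "payload OK"
# curl -s -X POST -H "Content-Type: application/json" \
#      -d @payload.json http://127.0.0.1:8188/prompt
```

Notice the two settings mentioned above map directly onto this JSON: steps lives in the KSampler node, image size in the EmptyLatentImage node.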
What's Next?
Once you're generating:
- Why I don't start with a prompt — the mindset shift
- The Three Starting Points — from reference images to better prompts
- How I Remix Any Prompt — remixing existing prompts
Welcome to the rabbit hole.