
ComfyUI on Mac: Your First Image in 15 Minutes

admin · Apr 7, 2026 · 4 min read

Why ComfyUI?

If you've already read the Automatic1111 Mac guide, ComfyUI is the node-based alternative to it: instead of a form full of settings, you get a visual graph where you wire pieces together.

The upside on Mac specifically: ComfyUI uses less memory than A1111. If you have 8GB or 16GB of RAM, that matters. You can generate larger images or run more complex workflows without running out of memory.


What You Need

  • A Mac with Apple Silicon — M1, M2, M3, M4, or any Pro/Max/Ultra variant
  • At least 8GB of RAM (16GB+ recommended)
  • About 15GB of free disk space
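You can confirm all three from Terminal. These are standard commands; `hw.memsize` is a macOS-specific sysctl key, so it only prints on a Mac:

```shell
# Chip: prints arm64 on Apple Silicon, x86_64 on an Intel Mac
uname -m

# Installed RAM in bytes (macOS sysctl key; 17179869184 = 16GB)
sysctl -n hw.memsize 2>/dev/null || echo "hw.memsize is macOS-only"

# Free space on the volume that holds your home folder
df -h ~
```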

Step 1: Install Homebrew (If You Haven't)

If you already installed Homebrew for the A1111 guide, skip this.

Open Terminal and paste:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Follow the prompts, then run the two commands the installer prints at the end to add Homebrew to your PATH.
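At the time of writing, the installer's suggested commands on Apple Silicon look like the pair below — but copy the exact lines from your own Terminal output, since they can change between Homebrew versions:

```shell
# Add Homebrew to your shell profile (Apple Silicon installs to /opt/homebrew)
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile

# Load it into the current session so brew works right away
eval "$(/opt/homebrew/bin/brew shellenv)"
```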


Step 2: Install Python

brew install [email protected]

Verify:

python3 --version

Step 3: Download ComfyUI

cd ~/Documents
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip3 install -r requirements.txt

The pip3 install step downloads ComfyUI's Python dependencies, including PyTorch, so expect it to take a few minutes.
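If pip3 refuses with an "externally-managed-environment" error (newer Homebrew Pythons enforce PEP 668), a virtual environment inside the ComfyUI folder sidesteps it. A sketch, assuming you're already in ~/Documents/ComfyUI:

```shell
# Create and activate a virtual environment in the ComfyUI folder
python3 -m venv venv
source venv/bin/activate

# Confirm pip now points inside the venv before reinstalling
pip --version
```

With the venv active, `pip install -r requirements.txt` installs into it rather than the system Python. Remember to run `source venv/bin/activate` again in every new Terminal window before launching ComfyUI.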


Step 4: Download a Checkpoint

Grab a model from Civitai:

  • For anime: Anything V5 or MeinaMix
  • For realistic: Realistic Vision or epiCRealism
  • For stylized: DreamShaper or RevAnimated

Move the .safetensors file to:

~/Documents/ComfyUI/models/checkpoints/

Step 5: Launch It

cd ~/Documents/ComfyUI
python3 main.py --force-fp16

The --force-fp16 flag is important on Mac — it tells ComfyUI to use half-precision, which works better with Apple Silicon's MPS backend and uses less memory.
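If you want to confirm PyTorch actually sees the GPU, this one-liner asks the installed torch build whether the Metal (MPS) backend is usable — it assumes you run it with the same Python you installed the requirements into:

```shell
# Prints True when PyTorch can use Apple's MPS backend
python3 -c "import torch; print(torch.backends.mps.is_available())" \
  2>/dev/null || echo "torch is not importable from this Python"
```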

When you see:

To see the GUI go to: http://127.0.0.1:8188

Open it in your browser.


Step 6: Your First Image

The default workflow is already loaded. Three things to do:

  1. Load Checkpoint node — click the dropdown, select your model
  2. CLIP Text Encode (positive) — type your prompt:
a girl standing in a field of flowers, sunset, beautiful lighting
  3. Empty Latent Image node — set the size: 512 x 768 for SD 1.5 portrait, 832 x 1216 for SDXL portrait.

Click Queue Prompt (or Ctrl+Enter).

On Apple Silicon, expect 30-90 seconds for your first image. It's slower than a comparable NVIDIA card, but the output quality is the same.


Step 7: Play

Change prompts. Change sizes. Find the KSampler node and try different step counts (20-30 is the sweet spot).

Mac-specific notes:

  • Memory is your limit. If generation fails on large images, make the image smaller or close other apps. ComfyUI will tell you if it runs out of memory.
  • --force-fp16 is your friend. Always use it on Mac. Without it, ComfyUI defaults to fp32 which uses twice the memory for no visible quality improvement.
  • Don't close Terminal. ComfyUI runs from Terminal. Close the window, lose the UI.
  • Speed scales with your chip. M1 is workable. M1 Pro/Max is comfortable. M2/M3/M4 variants are legitimately fast.
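If you do want the server to survive a closed window, one common workaround is to launch it in the background and send its output to a log file. A sketch, run from the ComfyUI folder (`comfyui.log` is just a name I picked):

```shell
# Run detached from this Terminal window; all output goes to comfyui.log
nohup python3 main.py --force-fp16 > comfyui.log 2>&1 &

# Later, watch the log or stop the server:
#   tail -f comfyui.log            (Ctrl+C stops tail, not the server)
#   pkill -f "python3 main.py"     (stops the server itself)
```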

The Node Graph in 30 Seconds

Left to right, the default workflow does:

  1. Load Checkpoint — loads the AI model
  2. CLIP Text Encode (positive) — your prompt
  3. CLIP Text Encode (negative) — what you don't want
  4. Empty Latent Image — canvas size
  5. KSampler — generates the image
  6. VAE Decode — converts to viewable image
  7. Save/Preview Image — shows the result

Every ComfyUI workflow is a fancier version of this chain.


What's Next?

Once you're generating, the rest is exploration. Welcome to the rabbit hole.
