Stable Diffusion on Mac: Your First Image in 20 Minutes
What You Need
- A Mac with Apple Silicon — M1, M2, M3, M4, or any variant (Pro, Max, Ultra). Intel Macs can technically run this but they're painfully slow. If you have an Intel Mac, the cloud setup guide is a better option.
- At least 16GB of RAM. 8GB works but you'll be limited to smaller images.
- At least 20GB of free disk space. Checkpoints are 2-7GB each.
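Not sure which chip or how much free space you have? You can check both from Terminal (these are standard macOS commands, nothing to install):

```shell
# Prints "arm64" on Apple Silicon, "x86_64" on Intel
uname -m

# Free space on your startup disk, in human-readable units
df -h /
```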
Apple Silicon Macs use MPS (Metal Performance Shaders) instead of NVIDIA's CUDA. It's slower than a dedicated NVIDIA GPU, but it works, and the output quality is identical. I run my entire workflow on a MacBook.
Step 1: Install Homebrew
Open Terminal (search for it in Spotlight or find it in Applications → Utilities) and paste this:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Follow the prompts. When it's done, it'll tell you to run two commands to add Homebrew to your PATH — run those commands. They look something like:
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
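To confirm Homebrew actually made it onto your PATH, a quick sanity check (the fallback message is just a reminder, not part of Homebrew itself):

```shell
# Prints the Homebrew version if the PATH setup worked;
# otherwise nudges you to rerun the shellenv commands above
command -v brew >/dev/null && brew --version \
  || echo "brew not found, rerun the two shellenv commands above"
```

If you get the reminder instead of a version number, open a new Terminal window and try again.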
Step 2: Install Python and Git
In Terminal:
brew install python@3.10 git
Verify with:
python3 --version
You should see Python 3.10.x.
Step 3: Download Automatic1111
In Terminal, navigate to where you want to install it. Your home folder or Documents works:
cd ~/Documents
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
Step 4: Download a Checkpoint
A checkpoint is the AI model that generates images. You need at least one.
Here are good first checkpoints depending on what you want to make:
For anime/illustration:
- Anything V5 — classic anime style, very forgiving
- MeinaMix — clean anime with good anatomy
For realistic/photographic:
- Realistic Vision — the go-to for realistic portraits
- epiCRealism — natural-looking with good skin
For stylized/3D:
- DreamShaper — versatile, does everything
- RevAnimated — great for fantasy and 3D-style art
Download the .safetensors file from Civitai and move it to:
~/Documents/stable-diffusion-webui/models/Stable-diffusion/
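If you'd rather do the move in Terminal, it looks like this. The filename is a made-up example (checkpoints you download will have their own names), so substitute yours:

```shell
# Folder A1111 scans for checkpoints; mkdir -p is a no-op if the
# git clone already created it
DEST=~/Documents/stable-diffusion-webui/models/Stable-diffusion
mkdir -p "$DEST"

# "anythingV5.safetensors" is a placeholder name, use your actual file;
# the || branch prints a hint instead of erroring if nothing is there yet
mv ~/Downloads/anythingV5.safetensors "$DEST"/ \
  || echo "nothing to move yet, download a checkpoint from Civitai first"

# Confirm what's in the folder
ls -lh "$DEST"
```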
Step 5: Launch It
In Terminal:
cd ~/Documents/stable-diffusion-webui
./webui.sh
The first launch takes a while: it's creating a Python environment and downloading PyTorch and the rest of the dependencies. Let it run. When you see:
Running on local URL: http://127.0.0.1:7860
Open that URL in your browser. You're in.
Note: On Mac, you might see a message about MPS being used. That's correct — it means your Apple Silicon GPU is doing the work. If you see warnings about MPS fallback for certain operations, that's normal too. The images will still generate fine.
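If you're curious whether MPS is really in play, you can ask the copy of PyTorch that webui.sh installs into its venv. The venv path below assumes the default install location from Step 3, and that you've launched at least once:

```shell
# Should print "True" on Apple Silicon once the venv exists;
# before the first launch you'll just get the hint instead
VENV_PY=~/Documents/stable-diffusion-webui/venv/bin/python3
[ -x "$VENV_PY" ] \
  && "$VENV_PY" -c "import torch; print(torch.backends.mps.is_available())" \
  || echo "venv not found, run ./webui.sh once first"
```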
Step 6: Your First Image
Select your checkpoint from the dropdown in the top-left. Click the refresh button if it doesn't appear.
Type a simple prompt:
a girl standing in a field of flowers, sunset, beautiful lighting
The two settings to play with:
- Steps: 20-30. Start with 20. It's faster and good enough to see what you're getting. Bump to 30 for more detail once you know your prompt is working.
- Width and Height. Start with 512x768 for portrait or 768x512 for landscape (SD 1.5 checkpoints). For SDXL checkpoints, use 832x1216 or 1216x832.
Leave everything else at defaults.
Click Generate.
On Apple Silicon, your first image will take 30-90 seconds depending on your chip and RAM. That's normal — it's slower than a dedicated NVIDIA GPU. Subsequent images will be faster because the model stays loaded in memory.
Step 7: Play
Experiment. Different prompts, different sizes, different step counts.
Things to know on Mac specifically:
- Generating is slower but quality is the same. A 30-step image on your Mac looks identical to one from a $2000 NVIDIA GPU. It just takes longer to cook.
- Size affects speed a lot. Bigger images = significantly more time. Start small, find prompts you like, then bump up the size for your favorites.
- Keep an eye on memory. If you have 16GB of RAM and you're running other apps, generation might slow down or fail on larger images. Close Chrome (it's eating your RAM).
- Don't close Terminal. The web UI runs from Terminal. If you close the Terminal window, the UI stops.
Don't worry about samplers, CFG, negative prompts, or any other settings yet. Just generate. Get a feel for it. The rest comes later.
What's Next?
Once you're generating comfortably:
- Why I don't start with a prompt — the mindset shift that changes everything
- The Three Starting Points — how to go from reference images to better prompts
- How I Remix Any Prompt — how to take someone else's prompt and make it your own
Welcome to the rabbit hole.