Start Here
New here? Read these five first.
A deliberate path from zero to making your first images. Roughly 60 minutes total. Free.
Stable Diffusion in the Cloud: No GPU Required
Don't have a powerful GPU? Here's how to run Stable Diffusion using free and paid cloud services — generate…
Seeds, Samplers, and CFG: The Settings That Actually Matter
What the generation settings in Stable Diffusion actually do — explained with no jargon so you can stop…
Checkpoints vs LoRAs vs Embeddings: What They Are and When to Use Each
The three types of models you'll use in Stable Diffusion — what they do, how they're different, and when to…
The Secret: I Don't Start With a Prompt
Everyone asks why they can't get the same results with my prompts. The answer: the prompt you see is the last…
The Three Starting Points: How I Decide Where to Begin
Every image starts somewhere different. A reference, extracted tags, or someone else's prompt. Here's how I…
Learn
Master AI Image Generation
Production-Grade Blind Evaluation: Four Pipeline Gotchas That Will Bite You
You wrote a script that auto-generates images across N checkpoints and feeds them into a blind eval. It works once. It breaks the next time. The failure modes are subtle: filename gaps that shift every subsequent image's label by one; old prompt-dirs from yesterday's run leaking into today's; new checkpoints invisible because the API cached its model list at startup; non-realism checkpoints saturating to black on prompts with heavy double-parens. None of these announce themselves; you just get a result that's quietly wrong. Here are the four gotchas, exactly what each one does, and the specific code fix for each.
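Two of those gotchas have compact fixes. Below is a minimal sketch (not the article's exact code) that wipes stale output dirs, forces A1111 to re-scan its checkpoint folder through the standard refresh-checkpoints endpoint before listing models, and derives filenames from indices so a failed generation leaves a visible gap instead of shifting every later label. The base URL, paths, and helper names are placeholders.

```python
import shutil
from pathlib import Path

import requests

API = "http://127.0.0.1:7860"  # default address for A1111 launched with --api (assumption)

def fresh_output_dir(root: str) -> Path:
    """Wipe yesterday's prompt-dirs so stale images can't leak into today's eval."""
    out = Path(root)
    if out.exists():
        shutil.rmtree(out)
    out.mkdir(parents=True)
    return out

def current_checkpoints() -> list[str]:
    """Ask the server to re-scan its model folder before listing checkpoints,
    so checkpoints added after startup are visible to this run."""
    requests.post(f"{API}/sdapi/v1/refresh-checkpoints", timeout=60).raise_for_status()
    r = requests.get(f"{API}/sdapi/v1/sd-models", timeout=60)
    r.raise_for_status()
    return [m["model_name"] for m in r.json()]

def image_name(prompt_idx: int, ckpt_idx: int) -> str:
    """Index-derived, zero-padded names: a failed generation leaves a visible hole
    instead of silently shifting the label of every image after it."""
    return f"p{prompt_idx:03d}_c{ckpt_idx:02d}.png"
```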
Bake Stability Diagnostics — When Your Recipe Won't Bake
You found a perfect LoRA recipe at runtime. Tournament-tested it across multiple rounds. Picked the winner. You bake it into the checkpoint and the output is a neon nightmare at every CFG. Lowering CFG doesn't help. Lighter weights don't help. Fresh base doesn't help. You've burned half a day on a recipe that won't survive being baked. Here's the diagnostic batch I use when this happens — five controlled variant bakes running in parallel, each isolating a different cause. By the time the batch finishes, you know exactly what broke (and usually it's something you couldn't have predicted from runtime behavior).
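As a rough illustration of the shape of that batch, here is a sketch that launches five variant bakes in parallel with a process pool, assuming a bake() helper exists; the variant names and weights are invented for the example, not the article's diagnostic set.

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical variants, each isolating one possible cause of a broken bake.
VARIANTS = {
    "baseline":      {"loras": {"light_a": 0.8, "light_b": 0.6, "face": 0.4}},
    "half_weights":  {"loras": {"light_a": 0.4, "light_b": 0.3, "face": 0.2}},
    "no_face_lora":  {"loras": {"light_a": 0.8, "light_b": 0.6}},
    "single_lora":   {"loras": {"light_a": 0.8}},
    "fresh_base":    {"loras": {"light_a": 0.8, "light_b": 0.6, "face": 0.4},
                      "base": "fresh.safetensors"},
}

def bake(name: str, spec: dict):
    """Placeholder for the real bake routine."""
    ...

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=5) as pool:
        # Submit all five bakes at once; results come back as each one finishes.
        futures = {pool.submit(bake, name, spec): name for name, spec in VARIANTS.items()}
        for fut, name in futures.items():
            print(name, "->", fut.result())
```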
Build A LoRA Triage UI (YES / MAYBE / NO For Hundreds Of LoRAs In One Sitting)
You've got 100+ LoRAs on disk and you genuinely don't know which ones earn their disk space. The default way to find out is to scroll through generation outputs in Finder and try to remember what you thought of each one. That doesn't scale. Here's a self-contained Python + HTML triage UI that pairs each test image with its no-LoRA baseline side-by-side, gives you three big YES/MAYBE/NO buttons, persists decisions in localStorage, and exports a markdown report. I ran through 120 LoRAs in under 15 minutes per pass with this.
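A stripped-down sketch of the same idea follows, assuming a single shared baseline.png and a tests/ folder of per-LoRA renders (both placeholders); the real tool pairs each LoRA with its own baseline and exports a markdown report, which this omits.

```python
from pathlib import Path

ROW = """<div class="row" id="{name}">
  <img src="{baseline}"><img src="{img}">
  <b>{name}</b>
  <button onclick="vote('{name}','YES')">YES</button>
  <button onclick="vote('{name}','MAYBE')">MAYBE</button>
  <button onclick="vote('{name}','NO')">NO</button>
</div>"""

SCRIPT = """<script>
function vote(name, v) {            // persist each decision in localStorage
  localStorage.setItem('triage:' + name, v);
  document.getElementById(name).style.opacity = 0.4;   // mark the row as decided
}
</script>"""

def build_page(test_dir="tests", baseline="baseline.png", out="triage.html"):
    """One row per LoRA render, baseline on the left, three vote buttons."""
    rows = [ROW.format(name=p.stem, baseline=baseline, img=p.as_posix())
            for p in sorted(Path(test_dir).glob("*.png"))]
    Path(out).write_text("<html><body>" + "\n".join(rows) + SCRIPT + "</body></html>")

if __name__ == "__main__":
    build_page()
```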
Bake LoRAs Permanently Into a Checkpoint (And Stop Re-Typing Lighting Tags Forever)
You've got a LoRA recipe you copy-paste into every prompt. Three lighting LoRAs. Maybe a face LoRA. Maybe a style LoRA. Same weights every time. You've A/B-tested it to death and you're never going to ship without it again. So why are you still typing it? Here's the Python script I use to bake any LoRA recipe permanently into an SDXL checkpoint — 12 seconds per bake, fully reproducible, no extension required. Plus the gotchas I hit (skipped CLIP-G layers, fp32 vs fp16 weight drift, naming-mismatch silent failures) so you don't lose a day to them.
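For a sense of the underlying math (a concept sketch, not the article's script): each LoRA pair folds into the base weight as W' = W + weight * (alpha / rank) * up @ down, accumulated in fp32 and cast back to the checkpoint's dtype. The key-name translation, which is where the silent-failure gotcha lives, is only stubbed here, and the kohya-style key layout is an assumption.

```python
import torch
from safetensors.torch import load_file, save_file

def target_key(lora_module: str) -> str:
    """Map a LoRA module name onto the matching checkpoint key. This naming
    translation is the model-specific part (and the silent-failure gotcha);
    it is stubbed here as an assumption."""
    raise NotImplementedError

def bake_lora(ckpt_path: str, lora_path: str, out_path: str, weight: float = 0.8):
    ckpt = load_file(ckpt_path)
    lora = load_file(lora_path)
    for key in [k for k in lora if k.endswith("lora_down.weight")]:
        base = key[: -len(".lora_down.weight")]
        down = lora[key].float()                       # shape (rank, in_features)
        up = lora[base + ".lora_up.weight"].float()    # shape (out_features, rank)
        # Fall back to alpha == rank when the LoRA stores no alpha tensor.
        alpha = lora.get(base + ".alpha", torch.tensor(float(down.shape[0]))).item()
        scale = weight * alpha / down.shape[0]
        tgt = target_key(base)
        # Accumulate in fp32, then cast back, to avoid fp16 drift.
        # Linear layers only; conv LoRA tensors need reshaping before the matmul.
        ckpt[tgt] = (ckpt[tgt].float() + scale * (up @ down)).to(ckpt[tgt].dtype)
    save_file(ckpt, out_path)
```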
Blind Tournaments: How to Pick Merge Winners When You Can't Trust Your Eyes
You merged five candidate checkpoints and now you have to pick the best one. The tournament setup I use to take my own bias out of the loop: a Python tournament generator, an HTML evaluator with hidden recipes, and the methodology that lets me ship merges I actually trust.
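The anonymization step might look something like the sketch below: copy each candidate's render under a random letter and seal the recipe-to-letter mapping in a separate key file you only open after voting. The function name and paths are made up for the example.

```python
import json
import random
import shutil
from pathlib import Path

def anonymize(candidates: dict[str, str], out_dir="tournament", key_file="key.json"):
    """candidates maps recipe name -> path of a representative render.
    Supports up to 10 candidates with single-letter labels."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    labels = random.sample("ABCDEFGHIJ", len(candidates))
    key = {}
    for label, (recipe, image_path) in zip(labels, candidates.items()):
        shutil.copy(image_path, out / f"candidate_{label}.png")  # evaluator only sees the letter
        key[label] = recipe
    Path(key_file).write_text(json.dumps(key, indent=2))         # open only after voting

# Usage (hypothetical paths):
# anonymize({"merge_v1_0.3-0.7": "renders/v1.png", "merge_v2_0.5-0.5": "renders/v2.png"})
```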
Auto-Testing Every LoRA You Train (A1111 API Integration)
Queue three LoRAs, train overnight, wake up — do you actually want to manually test each one? The auto-test pipeline: when training finishes, a script talks to A1111's HTTP API, generates test grids automatically, and saves them. Full code and queue integration.
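A minimal version of that API call, assuming A1111 is running locally with --api and the freshly trained LoRA already sits in the models folder; the weight, steps, and sizes are illustrative defaults, not the article's settings.

```python
import base64
from pathlib import Path

import requests

API = "http://127.0.0.1:7860"   # A1111 launched with --api (default port)

def test_grid(lora_name: str, prompts: list[str], out_dir: str = "lora_tests"):
    """Hit the txt2img endpoint once per test prompt with the new LoRA injected
    via the <lora:...> prompt syntax, and save the decoded PNGs."""
    out = Path(out_dir) / lora_name
    out.mkdir(parents=True, exist_ok=True)
    for i, prompt in enumerate(prompts):
        payload = {
            "prompt": f"<lora:{lora_name}:0.8> {prompt}",
            "steps": 25,
            "width": 1024,
            "height": 1024,
            "seed": 42,          # fixed seed so grids stay comparable across LoRAs
        }
        r = requests.post(f"{API}/sdapi/v1/txt2img", json=payload, timeout=600)
        r.raise_for_status()
        for j, img_b64 in enumerate(r.json()["images"]):
            (out / f"{i:02d}_{j}.png").write_bytes(base64.b64decode(img_b64))
```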
My LoRA Training Queue: How I Train Three LoRAs Overnight
Training one LoRA takes two hours. Training three takes six. The difference between "I trained three LoRAs this week" and "I trained three LoRAs last night" is a twenty-line JSON file and a single command.
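Something in this spirit, with a hypothetical queue.json schema and a placeholder train_lora.py entry point standing in for the real trainer:

```python
# queue.json, one entry per LoRA to train overnight (hypothetical schema):
# [
#   {"name": "style_a", "dataset": "datasets/style_a", "steps": 2400},
#   {"name": "char_b",  "dataset": "datasets/char_b",  "steps": 3000},
#   {"name": "light_c", "dataset": "datasets/light_c", "steps": 1800}
# ]
import json
import subprocess
import sys

def run_queue(queue_file: str = "queue.json"):
    """Run each training job back-to-back; a failed job is logged and skipped
    so one bad dataset doesn't kill the whole night."""
    with open(queue_file) as f:
        jobs = json.load(f)
    for job in jobs:
        cmd = [sys.executable, "train_lora.py",      # placeholder trainer entry point
               "--name", job["name"],
               "--dataset", job["dataset"],
               "--steps", str(job["steps"])]
        try:
            subprocess.run(cmd, check=True)
        except subprocess.CalledProcessError as e:
            print(f"[queue] {job['name']} failed ({e.returncode}); continuing")

if __name__ == "__main__":
    run_queue()
```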
Image Cropping for Consistent Output
The zoomer tool that handles the three things I need to do with every image: crop out the best part, expand borders without stretching, and change aspect ratios cleanly.
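The three operations are easy to picture with plain Pillow; this is a sketch of the concepts, not the zoomer tool itself.

```python
from PIL import Image, ImageOps

def crop_best_part(path, box, out):
    """box = (left, top, right, bottom) in pixels."""
    Image.open(path).crop(box).save(out)

def expand_borders(path, pixels, out, fill=(255, 255, 255)):
    """Add a border instead of stretching, so nothing in the frame distorts."""
    ImageOps.expand(Image.open(path), border=pixels, fill=fill).save(out)

def change_aspect(path, size, out):
    """Center-crop to the target aspect ratio and resize, e.g. size=(1024, 768)."""
    ImageOps.fit(Image.open(path), size).save(out)
```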
The Auto-Tagger Pipeline: Closing the Feedback Loop
Three different taggers, a metadata extractor, a master prompt aggregator, and how they all feed back into the generation pipeline to make every batch better than the last.
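The aggregation step alone, as a rough sketch: merge per-image tag files (assuming the usual one-line, comma-separated tagger output) into a frequency-ranked master list. How those frequencies feed back into generation is the article's subject and isn't shown here.

```python
from collections import Counter
from pathlib import Path

def aggregate_tags(tag_dir: str, top_n: int = 50) -> list[tuple[str, int]]:
    """Read every .txt tag file in tag_dir and return the top_n most common tags
    as (tag, count) pairs."""
    counts = Counter()
    for txt in Path(tag_dir).glob("*.txt"):
        tags = [t.strip() for t in txt.read_text().split(",") if t.strip()]
        counts.update(tags)
    return counts.most_common(top_n)

# Usage: aggregate_tags("outputs/tagged") -> [(tag, count), ...]
```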
Batch Generation: From Recipe to 500 Images
How the batch generation scripts actually work — the recipe generator, the batch prompter, the LoRA novel system, checkpoint cycling, and how they produce hundreds of images while you do something else.
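Checkpoint cycling in particular is a small amount of code against A1111's options endpoint; a sketch, with the actual generation call stubbed (see the txt2img sketch above):

```python
import requests

API = "http://127.0.0.1:7860"   # A1111 with --api (assumption)

def cycle_checkpoints(checkpoints: list[str], prompts: list[str]):
    """Switch the loaded model via the options endpoint, then run the whole
    prompt list against it before moving on to the next checkpoint."""
    for ckpt in checkpoints:
        requests.post(f"{API}/sdapi/v1/options",
                      json={"sd_model_checkpoint": ckpt},
                      timeout=600).raise_for_status()   # model switching can be slow
        for prompt in prompts:
            generate(prompt)

def generate(prompt: str):
    """Stub for the batch txt2img call sketched earlier."""
    raise NotImplementedError
```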