Start Here
New here? Read these five first.
A deliberate path from zero to making your first images. Roughly 60 minutes total. Free.
Stable Diffusion in the Cloud: No GPU Required
Don't have a powerful GPU? Here's how to run Stable Diffusion using free and paid cloud services — generate images from any computer.
Seeds, Samplers, and CFG: The Settings That Actually Matter
What the generation settings in Stable Diffusion actually do — explained with no jargon so you can stop guessing and start controlling your output.
Checkpoints vs LoRAs vs Embeddings: What They Are and When to Use Each
The three types of models you'll use in Stable Diffusion — what they do, how they're different, and when to use each one.
The Secret: I Don't Start With a Prompt
Everyone asks why they can't get the same results with my prompts. The answer: the prompt you see is the last step, not the first.
The Three Starting Points: How I Decide Where to Begin
Every image starts somewhere different. A reference, extracted tags, or someone else's prompt. Here's how I decide which door to walk through — and what happens after I do.
Learn
Master AI Image Generation
Production-Grade Blind Evaluation: Four Pipeline Gotchas That Will Bite You
You wrote a script that auto-generates images across N checkpoints and feeds them into a blind eval. It works once. It breaks the next time. The failure modes are subtle: filename gaps that shift every subsequent image's label by one; old prompt-dirs from yesterday's run leaking into today's; new checkpoints invisible because the API cached its model list at startup; non-realism checkpoints saturating to black on prompts with heavy double-parens. None of these announce themselves; you just get a result that's quietly wrong. Here are the four gotchas, exactly what each one does, and the specific code fix for each.
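A taste of the fixes for the first three gotchas, as a minimal sketch: the two A1111 endpoints are the real public API, but the helper names, directory layout, and filename pattern are my own illustration.

```python
import shutil
from pathlib import Path

import requests

A1111 = "http://127.0.0.1:7860"  # assumed local A1111 instance started with --api

def fresh_run_dir(root: Path) -> Path:
    """Gotcha 2: wipe yesterday's prompt dirs so stale images can't leak into today's eval."""
    if root.exists():
        shutil.rmtree(root)
    root.mkdir(parents=True)
    return root

def refresh_checkpoints() -> list[str]:
    """Gotcha 3: force a model rescan instead of trusting the list cached at startup."""
    requests.post(f"{A1111}/sdapi/v1/refresh-checkpoints", timeout=120)
    models = requests.get(f"{A1111}/sdapi/v1/sd-models", timeout=60).json()
    return [m["model_name"] for m in models]

def save_image(run_dir: Path, prompt_idx: int, ckpt_idx: int, png: bytes) -> None:
    """Gotcha 1: name files from indices, never from a count of files already on
    disk, so a failed generation leaves a visible gap instead of shifting labels."""
    (run_dir / f"p{prompt_idx:03d}_c{ckpt_idx:02d}.png").write_bytes(png)
```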
The 10% Accent Rule: Composites That Beat Their Ingredients
You ran a graft-comparison round at 30%. One candidate placed surprisingly high in a small early eval, then collapsed when you verified with more prompts — but the model has a real visual character you don't want to lose. Most people drop it and pick from the remaining survivors. The better move: keep it as a 10% accent on top of the survivors. The composite usually beats every ingredient, including the accent candidate at its original 30%. Here's the rule, when it applies, and why a primary-secondary-accent split at roughly 70/20/10 is the structure that works.
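Mechanically, the composite is an ordinary three-way weighted merge. A minimal sketch, assuming all three checkpoints share one architecture and key set; the filenames are placeholders:

```python
from safetensors.torch import load_file, save_file

# Primary / secondary / accent at the ~70/20/10 split the article describes.
RECIPE = {
    "survivor_a.safetensors": 0.70,  # primary
    "survivor_b.safetensors": 0.20,  # secondary
    "accent.safetensors":     0.10,  # the candidate that collapsed at 30%
}

merged = {}
for path, w in RECIPE.items():
    for key, tensor in load_file(path).items():
        merged[key] = merged[key] + tensor * w if key in merged else tensor * w

save_file(merged, "composite_70_20_10.safetensors")
```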
Two Hard Rules For Blind Evals: 5-Prompt Floor And Always-Control
You ran a blind eval, picked a winner, almost shipped it — then a verification round flipped the result entirely. The candidate that won two of three prompts placed fourth across five. Three prompts felt like enough data; it was actually noise dressed up as signal. There are two specific design rules that prevent this failure: a hard floor on prompt count, and always including the previous version as a control. Cheap to apply, painful to ignore. Here's what each one buys you and the exact thresholds I use now.
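Both rules are cheap enough to enforce in code before a round ever runs. A guard-clause sketch; the function and constant names are mine:

```python
MIN_PROMPTS = 5  # the hard floor: below this, rankings are noise dressed as signal

def validate_round(prompts: list[str], candidates: list[str], control: str) -> None:
    """Refuse to launch a blind eval that can't produce a trustworthy ranking."""
    if len(prompts) < MIN_PROMPTS:
        raise ValueError(f"only {len(prompts)} prompts; the floor is {MIN_PROMPTS}")
    if control not in candidates:
        raise ValueError("the previous version must run as a control in every round")
```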
Weighted Scoring — When Your 3/2/1 Tournament Hides The Real Winner
Your blind eval came back with two models tied at the top. 21 points each across 10 prompts under standard 3/2/1 top-three ranking. Looks like a coin flip. It probably isn't. The standard scoring scheme treats 'never bombs' and 'wins more often' as equivalent — but for production model selection, those are very different qualities. Here's how to re-score the same data under different weighting schemes to surface the real preference, why ties under standard scoring often resolve cleanly when you reweight, and how to pick a scoring scheme that matches what you'll actually do with the result.
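Re-scoring costs a few lines once you've kept the raw per-prompt rankings instead of just the totals. A sketch with hypothetical ballots:

```python
from collections import defaultdict

# One top-3 ranking per prompt, best first (made-up results).
ballots = [
    ["model_a", "model_b", "model_c"],
    ["model_b", "model_a", "model_d"],
    ["model_a", "model_d", "model_b"],
]

SCHEMES = {
    "standard_321": (3, 2, 1),  # treats 'never bombs' and 'wins often' alike
    "win_heavy":    (5, 2, 1),  # rewards taking first place outright
    "top_only":     (1, 0, 0),  # pure win count
}

for name, weights in SCHEMES.items():
    scores = defaultdict(int)
    for ranking in ballots:
        for place, model in enumerate(ranking):
            scores[model] += weights[place]
    print(name, sorted(scores.items(), key=lambda kv: -kv[1]))
```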
Why Baked LoRAs Behave Differently Than Runtime LoRAs
You tested a LoRA stack at runtime — included it in the prompt at specific weights — and the output was great. You baked the same stack into the model at the same weights, expecting the same output. Instead you got neon nightmare, blown-out colors, or just a noticeably weaker version of what worked at runtime. Same weights, same LoRAs, same base model. Why does the bake behave differently? Three reasons that compound: CFG amplification math, fp16 precision drift, and sequential layering effects. Understanding each tells you why some recipes will never bake, no matter how much you tune.
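The CFG piece is visible with toy numbers. Classifier-free guidance combines two predictions per step, and a baked LoRA shifts the weights behind both of them, so whatever it adds to the gap between them gets multiplied by the CFG scale. The values below are made up purely to show the lever arm:

```python
cfg = 7.0

# pred = uncond + cfg * (cond - uncond)
cond, uncond = 1.00, 0.40                      # hypothetical pre-bake predictions
pred = uncond + cfg * (cond - uncond)          # 4.60

cond_b, uncond_b = 1.10, 0.42                  # the bake nudges both branches
pred_b = uncond_b + cfg * (cond_b - uncond_b)  # 5.18

print(pred_b - pred)  # small nudges to both branches became a 0.58 swing at CFG 7
```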
Bake Stability Diagnostics — When Your Recipe Won't Bake
You found a perfect LoRA recipe at runtime. Tournament-tested it across multiple rounds. Picked the winner. You bake it into the checkpoint and the output is neon nightmare at every CFG. Lowering CFG doesn't help. Lighter weights don't help. Fresh base doesn't help. You've burned half a day on a recipe that won't survive being baked. Here's the diagnostic batch I use when this happens — five controlled variant bakes running in parallel, each isolating a different cause. By the time the batch finishes, you know exactly what broke (and usually it's something you couldn't have predicted from runtime behavior).
Auditing LoRAs At Maximum-Safe Weights (See What They Really Do)
When you're triaging which LoRAs to keep on disk, testing each at its default 0.4-0.6 weight gives you a muted, ambiguous signal — 'did this actually do anything?' instead of 'what does this LoRA really want to do?' Bump every test LoRA to its category's maximum safe weight and you'll get a much sharper read on each one's character. Different LoRA categories have different safe ceilings — sliders go to 1.5, photoreal lighting tops out at 0.6, Pony-on-Illustrious crashes above 0.4. Here's the schema I use for max-safe weights per LoRA type, and the reasoning behind each ceiling.
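In code the schema is just a lookup table. The three ceilings below are the ones named above; the fallback value for anything uncategorized is my assumption:

```python
# Max-safe audit weight per LoRA category.
MAX_SAFE_WEIGHT = {
    "slider":              1.5,  # sliders tolerate going well past 1.0
    "photoreal_lighting":  0.6,  # lighting LoRAs blow out above this
    "pony_on_illustrious": 0.4,  # cross-family LoRAs crash above 0.4
}
DEFAULT_AUDIT_WEIGHT = 0.8  # assumed fallback, not from the article

def audit_weight(category: str) -> float:
    return MAX_SAFE_WEIGHT.get(category, DEFAULT_AUDIT_WEIGHT)
```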
Why Some LoRAs Survive Baking And Others Don't (Passive vs Trigger-Dependent)
You baked your favorite LoRA recipe into a checkpoint following all the right steps. The runtime version of the recipe produces beautiful images. The baked version produces something flatter, weaker, like the LoRAs are barely there. The math is right. The weights match. The fault isn't in your bake — it's in the type of LoRA you baked. Passive style LoRAs translate cleanly to baked weights; trigger-dependent LoRAs don't. Here's the distinction, why it matters, and how to know which kind you're baking before you find out the hard way.
Build A LoRA Triage UI (YES / MAYBE / NO For Hundreds Of LoRAs In One Sitting)
You've got 100+ LoRAs on disk and you genuinely don't know which ones earn their disk space. The default way to find out is to scroll through generation outputs in Finder and try to remember what you thought of each one. That doesn't scale. Here's a self-contained Python + HTML triage UI that pairs each test image with its no-LoRA baseline side-by-side, gives you three big YES/MAYBE/NO buttons, persists decisions in localStorage, and exports a markdown report. I ran through 120 LoRAs in under 15 minutes per pass with this.
Downloading Big Checkpoints From Civitai (When Your Browser Won't)
You're trying to download a 6.5 GB checkpoint from Civitai. Chrome gets to 3 GB, the connection blips, and the download starts over from zero. You retry. Same thing. The browser is not built for big files over flaky CDNs. Here's the curl one-liner that auto-resumes from byte N on connection drop, finishes downloads the browser can't, and works on Mac, Linux, and WSL out of the box.
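One plausible shape of that command (the URL is a placeholder; use the model version's download link, and note that some Civitai downloads also require an API token):

```bash
# -L follows redirects, -C - resumes from wherever the last attempt died,
# --retry re-tries transient failures instead of giving up.
curl -L -C - --retry 20 --retry-delay 5 --retry-all-errors \
     -o model.safetensors \
     "https://civitai.com/api/download/models/<VERSION_ID>"
```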
Multi-Round Merge Tournaments: Wide → Narrow → Dial-In
You ran a tournament with five candidate merges. Picked a winner. Shipped it. Two months later you wonder if the loser at slot 3 might have actually been better with slightly different weights — and you have no way to know without redoing everything. The fix is a multi-round tournament structure: wide net first, narrow on the winner's neighborhood, dial in along a single axis. Three rounds, ten or so total candidates, an answer you can defend. Here's how to design each round so the result is interpretable, not just a winner.
Diagnosing Dual-Purpose Ingredients In Your Merge (Why Your Anime Merge Keeps Drifting Toward 3D)
Your anime merge keeps coming out semireal. Your photoreal merge keeps drifting cinematic when you don't want cinematic. You've tried adjusting weights, you've tried swapping samplers, you've tried different prompts — nothing brings it cleanly back to where you want. The reason might not be the merge ratio. It might be that one of your ingredients is doing two jobs at once, and you only wanted one of them. Here's how to spot dual-purpose ingredients, why they're hard to dial out, and the fix that actually works.
V2 Or A New Model? How To Decide When To Add A Version On Civitai
You've got an updated checkpoint or LoRA ready to ship. Same family as something you've already published — but it's a meaningfully different output. Do you click "Add Version" on the existing model page, or post it as a new model? It sounds like a small decision but it's actually a strategic one. Here's the rule I use, when I break it, and what each path actually costs.
The Trigger Word Lie: Why The Tag List Under A LoRA Isn't What You Think It Is
You download a LoRA, hover over it in the WebUI, and see a list of "trigger words" the loader auto-extracted. You dutifully paste the top one into your prompt. Most of the time it does nothing. Sometimes it makes the output worse. Here's why that auto-extracted list isn't actually the trigger, how to find out what (if anything) the LoRA really wants you to type, and the rule of thumb for when triggers matter and when they're cosmetic.
Stabilizer LoRAs: How a Lighting LoRA Accidentally Fixed My Neon Nightmare
Your photoreal merge is mostly great but every dozen prompts it falls into oversaturated magenta-green hell at CFG 5+. You've tried lower CFG. You've tried different VAEs. You've tried leaner negatives. They all help a little, none of them fix it. The fix isn't a different sampler or a tighter prompt — it's a tiny dose of any well-trained LoRA baked into the merge. Here's why that works, what to bake, and how to know if you need it.
Bake LoRAs Permanently Into a Checkpoint (And Stop Re-Typing Lighting Tags Forever)
You've got a LoRA recipe you copy-paste into every prompt. Three lighting LoRAs. Maybe a face LoRA. Maybe a style LoRA. Same weights every time. You've A/B-tested it to death and you're never going to ship without it again. So why are you still typing it? Here's the Python script I use to bake any LoRA recipe permanently into an SDXL checkpoint — 12 seconds per bake, fully reproducible, no extension required. Plus the gotchas I hit (skipped CLIP-G layers, fp32 vs fp16 weight drift, naming-mismatch silent failures) so you don't lose a day to them.
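The heart of any bake is the low-rank update itself: W′ = W + (α/r) · recipe_weight · (up · down). A heavily simplified sketch of that core; the fiddly part, translating LoRA key names to checkpoint key names, is reduced here to an identity stub you'd have to replace before this runs against real files:

```python
import torch
from safetensors.torch import load_file, save_file

def map_to_ckpt_key(lora_base: str) -> str:
    # Stub: real A1111-style LoRA keys ("lora_unet_...") need a proper
    # translation to checkpoint keys ("model.diffusion_model...."); this
    # identity mapping only marks where that translation belongs.
    return lora_base

def bake(ckpt_path: str, lora_path: str, weight: float, out_path: str) -> None:
    ckpt = load_file(ckpt_path)
    lora = load_file(lora_path)
    for key in [k for k in lora if k.endswith(".lora_down.weight")]:
        base = key.removesuffix(".lora_down.weight")
        down = lora[key].float()  # fp32 math, cast back once: avoids fp16 drift
        up = lora[base + ".lora_up.weight"].float()
        alpha = lora.get(base + ".alpha", torch.tensor(float(down.shape[0]))).item()
        scale = (alpha / down.shape[0]) * weight  # (alpha / rank) * recipe weight
        ckpt_key = map_to_ckpt_key(base) + ".weight"
        delta = (up.flatten(1) @ down.flatten(1)).reshape(ckpt[ckpt_key].shape)
        ckpt[ckpt_key] = (ckpt[ckpt_key].float() + scale * delta).to(ckpt[ckpt_key].dtype)
    save_file(ckpt, out_path)
```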
Lean Negatives For Photoreal Merges (And When Heavy Stacks Backfire)
You copy-pasted the same 25-token negative across every checkpoint you own. It works on anime merges. Then you load a photoreal merge and get neon nightmare output. Here's why photoreal merges have less headroom for heavy negatives, what a lean negative looks like, and the rule I use to size negatives to the merge.
Blind Tournaments: How to Pick Merge Winners When You Can't Trust Your Eyes
You merged five candidate checkpoints and now you have to pick the best one. The tournament setup I use to take my own bias out of the loop: a Python tournament generator, an HTML evaluator with hidden recipes, and the methodology that lets me ship merges I actually trust.
Why You Can't Average VAE Tensors When Merging Across Model Families
You merge two Illustrious models 50/50 expecting the best of both. Instead you get a grey-green wash. Here's what's actually happening, why your merge tool probably has this bug, and the one-line fix that unlocks every cross-family merge you've been avoiding.
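The shape of the fix, assuming the original (non-diffusers) checkpoint layout where VAE tensors live under the first_stage_model. prefix:

```python
import torch

def merge_tensor(key: str, a: torch.Tensor, b: torch.Tensor,
                 ratio: float = 0.5) -> torch.Tensor:
    # VAE tensors get carried through from ONE parent, never averaged.
    if key.startswith("first_stage_model."):
        return a                            # keep model A's VAE verbatim
    return a * (1.0 - ratio) + b * ratio    # ordinary weighted merge elsewhere
```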
Auto-Testing Every LoRA You Train (A1111 API Integration)
Queue three LoRAs, train overnight, wake up — do you actually want to manually test each one? The auto-test pipeline: when training finishes, a script talks to A1111's HTTP API, generates test grids automatically, and saves them. Full code and queue integration.
Triangulation Training: Three Experiments for When You Don't Know What's Broken
You trained a LoRA, it didn't work, you tweaked something, retrained, still didn't work. Now you've burned six hours iterating blind. Here's the method I use to get out: three experiments that bracket the problem and tell you the answer no matter which one wins.
The LoRA Strength Grid: How to Actually Know If Your LoRA Works
You trained a LoRA. You load it up, generate one image, and... you can't tell if it's working. One generation can't answer the question. A strength grid can. Here's the script, the setup, and the diagnostic cases that tell you exactly what's wrong.
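The core of such a script is a loop that holds the prompt and seed fixed and varies only the LoRA strength, so any change between columns is the LoRA and nothing else. A sketch against A1111's txt2img API; the LoRA name, prompt, and strength steps are placeholders:

```python
import base64

import requests

A1111 = "http://127.0.0.1:7860"  # assumed local A1111 started with --api
LORA = "my_new_lora"             # hypothetical LoRA filename (no extension)

for s in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    payload = {
        "prompt": f"portrait photo, soft light <lora:{LORA}:{s}>",
        "negative_prompt": "lowres, blurry",
        "seed": 12345, "steps": 25, "cfg_scale": 6,
        "width": 832, "height": 1216,
    }
    r = requests.post(f"{A1111}/sdapi/v1/txt2img", json=payload, timeout=600)
    with open(f"grid_{LORA}_{s:.1f}.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
```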
Character vs Style LoRA Captions (The Rule Reverses)
Style LoRA captions and character LoRA captions follow opposite rules. Nobody tells you this clearly, which is why people try to train character LoRAs with style-style captions (or vice versa) and end up with a LoRA that activates inconsistently. Here's the clean mental model for both.
My LoRA Training Queue: How I Train Three LoRAs Overnight
Training one LoRA takes two hours. Training three takes six. The difference between "I trained three LoRAs this week" and "I trained three LoRAs last night" is a twenty-line JSON file and a single command.
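The exact schema is in the post; hypothetically, such a queue file looks something like this, with a runner script popping one entry at a time and launching each training job:

```json
[
  { "name": "char_alice_v1", "dataset": "datasets/alice",   "base": "sdxl_base.safetensors", "steps": 2000 },
  { "name": "style_inkwash", "dataset": "datasets/inkwash", "base": "sdxl_base.safetensors", "steps": 2500 },
  { "name": "char_bob_v2",   "dataset": "datasets/bob",     "base": "sdxl_base.safetensors", "steps": 2000 }
]
```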
Why Your LoRAs Come Out Weak (And the Settings That Fix It)
You trained a LoRA. It ran for hours. You loaded it up, prompted with the trigger word, and... nothing. Or close to nothing. Here's exactly why that happens and the settings that actually make LoRAs hit.
Composition Control: Getting the Shot You Want
Camera angles, framing techniques, and resolution tricks that Stable Diffusion actually responds to — and how to stop getting the same centered front-facing shot every time.
Color and Lighting Control in Prompts
How to actually control lighting and color in Stable Diffusion — specific setups that work, color palette techniques, and why 'good lighting' means nothing to the AI.
My Complete Daily Workflow: Idea to Upload, Start to Finish
The capstone — every tool, every script, every decision point connected into a single walkthrough of how I produce 100-500 images a day from start to upload.
Image Cropping for Consistent Output
The zoomer tool that handles the three things I need to do with every image: crop out the best part, expand borders without stretching, and change aspect ratios cleanly.
Checkpoint Showdown: My Top Picks and the Realism Spectrum
How I think about checkpoints as a spectrum from realistic to cartoon — and why running the same prompt across that entire range is how I find the best version of every image.
My Top 10 Negative Prompt Tricks
The negative prompt I use on every image, why each part is there, and 10 techniques I've learned for controlling what Stable Diffusion doesn't put in your images.
Building a Prompt Library: Save Everything, Reuse Forever
How I built a reusable library of image prompts and enhancement prompts over years of testing — and how having that library makes every new idea faster to execute.
LoRA Stacking 101: How I Combine LoRAs for Better Images
How I use base LoRAs on every image, randomly cycle through a huge pool for discovery, and repurpose character LoRAs to create original characters that don't exist anywhere else.
The Auto-Tagger Pipeline: Closing the Feedback Loop
Three different taggers, a metadata extractor, a master prompt aggregator, and how they all feed back into the generation pipeline to make every batch better than the last.
Batch Generation: From Recipe to 500 Images
How the batch generation scripts actually work — the recipe generator, the batch prompter, the LoRA novel system, checkpoint cycling, and how they produce hundreds of images while you do something else.
How I Use the Prompt Toolkit to Upgrade Any Prompt
The enhancement meta-prompts I run on every image prompt before it touches the splitter — what each one does, when to use it, and the exact order that works.
The Prompt Recipe System: How I Generate Hundreds of Unique Images From One Idea
A deep dive into the JSON recipe system that powers my batch generation — the splitter, the library slots, the generator script, and how they all connect.
Seeds, Samplers, and CFG: The Settings That Actually Matter
What the generation settings in Stable Diffusion actually do — explained with no jargon so you can stop guessing and start controlling your output.
Checkpoints vs LoRAs vs Embeddings: What They Are and When to Use Each
The three types of models you'll use in Stable Diffusion — what they do, how they're different, and when to use each one.
ComfyUI in the Cloud: No GPU Required
Run ComfyUI without a GPU using cloud services. Node-based AI image generation from any computer, no hardware required.
ComfyUI on Mac: Your First Image in 15 Minutes
Install ComfyUI on a Mac with Apple Silicon, load a checkpoint, and generate your first image using the node-based workflow.
ComfyUI on Windows: Your First Image in 15 Minutes
Install ComfyUI on Windows, load a checkpoint, and generate your first AI image — the node-based alternative to Automatic1111.
Stable Diffusion in the Cloud: No GPU Required
Don't have a powerful GPU? Here's how to run Stable Diffusion using free and paid cloud services — generate images from any computer.
Stable Diffusion on Mac: Your First Image in 20 Minutes
How to install Automatic1111 on a Mac with Apple Silicon, download your first checkpoint, and start generating AI images using MPS.
Stable Diffusion on Windows: Your First Image in 20 Minutes
A step-by-step guide to installing Automatic1111 on Windows, downloading your first checkpoint, and generating your first AI image.
How I Remix Any Prompt Into Something New
You find a prompt on Civitai that catches your eye. Instead of copying it and hitting generate, here's how I break it apart, add hundreds of variations, and turn one prompt into something completely new.
My Complete Workflow: A Full Day of Image Generation
What a real day of generating 100-500 images actually looks like. The morning review, the setup, the generation, the selection, and the upload. Every step, in order.
50,000 Variations From One Prompt
One prompt. One recipe. 50,000 possible combinations. Here's how I generate hundreds of unique images from a single concept without writing a single prompt by hand.
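The arithmetic is just slot counts multiplying. The slot names and counts below are hypothetical; the point is how fast a handful of small lists compounds:

```python
import math

slots = {"subject": 10, "outfit": 8, "setting": 5,
         "lighting": 5, "camera": 5, "style": 5}
print(math.prod(slots.values()))  # 50,000 combinations from one recipe
```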
The Three Starting Points: How I Decide Where to Begin
Every image starts somewhere different. A reference, extracted tags, or someone else's prompt. Here's how I decide which door to walk through — and what happens after I do.
The Secret: I Don't Start With a Prompt
Everyone asks why they can't get the same results with my prompts. The answer: the prompt you see is the last step, not the first.