
50,000 Variations From One Prompt

admin · Apr 2, 2026 · 7 min read

One Prompt, Infinite Images

In the first tutorial, I showed you the pipeline. In the second, I showed you the three entry points and how I crop, tag, and enhance a reference image into a finished piece.

That flow works great when I want a specific image. But what about when I want hundreds?

This is my other main workflow — and it's how I produce the bulk of my daily output. Instead of carefully guiding one image through the pipeline, I set up a system that generates thousands of variations automatically, and then I cherry-pick the best ones.

Same starting points. Same finish. Completely different middle.


The Concept: Prompt Recipes

Think of a recipe in a cookbook. It has ingredients — but some ingredients are interchangeable. You could use chicken or tofu. Basil or cilantro. The recipe stays the same, but every combination produces something different.

That's exactly how my prompt recipes work.

I take a prompt and break it into slots. Each slot represents one element of the image — the character, the pose, the outfit, the lighting, the composition. Then I give each slot multiple options.

The system randomly picks one option from each slot and combines them into a complete prompt. Different combination every time.
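The slot-and-pick idea fits in a few lines of Python. This is just a sketch — the slot names and options here are placeholders, not my actual recipe:

```python
import random

# Hypothetical recipe: each slot maps to its interchangeable options.
recipe = {
    "subject":  ["portrait of a knight", "portrait of a ranger"],
    "pose":     ["standing", "sitting", "mid-stride"],
    "lighting": ["golden hour", "overcast", "neon rim light"],
}

def roll_prompt(recipe):
    """Pick one option per slot and join them into a complete prompt."""
    return ", ".join(random.choice(options) for options in recipe.values())

print(roll_prompt(recipe))  # a different combination (almost) every call
```

Every call rolls the dice again, so two generations in a row almost never share the same prompt.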

Here's where the math gets wild:

  • 4 body type options
  • 4 different builds
  • 34 character variations
  • 5 different checkpoints
  • Random LoRA mixing on top of all of that

4 × 4 × 34 × 5 = 2,720 unique combinations. And that's a simple recipe. Some of mine have slots with 10, 20, even 50 options. The total possibility space can hit 50,000+ unique images from a single recipe.
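The possibility count is just the product of the slot sizes, so you can sanity-check any recipe in one line:

```python
from math import prod

slot_sizes = [4, 4, 34, 5]  # body types, builds, characters, checkpoints
total = prod(slot_sizes)
print(total)  # 2720
```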

I don't generate all 50,000. I let it run for a few hours and produce 200-500 images, then I scroll through and pull out the ones that catch my eye.


How It Starts

The entry point is the same as everything else — I start in one of three ways:

  • Find a cool image somewhere and extract tags from it
  • Grab a prompt from Civitai or anywhere I see something interesting
  • Write something from scratch based on an idea in my head

Whichever route I take, I end up with a base prompt. Then I run it through my enhancement pass — the same lighting, composition, and color upgrades I showed in the last tutorial.

But this time, instead of generating directly from that enhanced prompt, I feed it into something different.


The Splitter

This is where it diverges from the reference image flow.

I take my enhanced prompt and run it through a splitter — a tool that breaks the prompt apart into individual pieces. Subject, pose, clothing, setting, lighting, composition — each piece becomes its own slot.

Then I go through the slots and make decisions:

  • Which slots stay fixed? If I want every variation to have the same lighting setup, I lock that slot to one option.
  • Which slots get variations? If I want different poses, I add multiple options to the pose slot. Different outfits? Multiple options in the clothing slot.
  • What extras do I add? Body types, character archetypes, specific details that aren't in the original prompt but would create interesting variations.

I also pick which checkpoints (AI models) to cycle through. Each checkpoint interprets the same prompt differently — different art styles, different levels of detail, different strengths. Running the same recipe through 5 checkpoints multiplies the variety.

And then there are LoRAs — small add-on models that push the output in specific directions. Character LoRAs, style LoRAs, quality LoRAs. I have a library of them, and the system randomly mixes different LoRAs into each generation. Some combinations produce unexpected magic.
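Random LoRA mixing can be sketched like this — the library names, the cap of two LoRAs per image, and the weight range are all illustrative assumptions, not my actual setup:

```python
import random

# Hypothetical LoRA library; names and weight range are placeholders.
lora_library = ["char_a_v2", "ink_style", "detail_boost", "film_grain"]

def mix_loras(library, max_loras=2, weight_range=(0.4, 0.9)):
    """Randomly choose up to max_loras add-ons, each with a random weight."""
    chosen = random.sample(library, k=random.randint(0, max_loras))
    return {name: round(random.uniform(*weight_range), 2) for name in chosen}

print(mix_loras(lora_library))  # e.g. {'ink_style': 0.62, 'film_grain': 0.45}
```

Because both the selection and the weights are random, the same prompt can land very differently from one generation to the next — that's where the unexpected combinations come from.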


The Generation

Once the recipe is set up, I hit go and let it run.

The system works through every combination automatically — picking a random option from each variable slot, selecting a checkpoint, mixing in LoRAs, and generating an image. Then it does it again with a different combination. And again. And again.

I usually let it run for 2-4 hours depending on how complex the recipe is. By the time it's done, I have a folder with 200-500 images.
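The loop itself is simple. In this sketch, `generate()` is a stub standing in for a real txt2img backend call, and the slot contents and model names are placeholders:

```python
import random

# Hypothetical recipe pieces; a real run would call a generation backend.
slots = {
    "pose":   ["standing", "crouching", "mid-stride"],
    "outfit": ["armor", "cloak", "streetwear"],
}
checkpoints = ["model_a", "model_b", "model_c"]

def generate(prompt, checkpoint):
    return f"[{checkpoint}] {prompt}"   # stand-in for the actual render

def run_batch(n_images):
    """Roll a fresh slot combination and checkpoint for every image."""
    images = []
    for _ in range(n_images):
        prompt = ", ".join(random.choice(opts) for opts in slots.values())
        images.append(generate(prompt, random.choice(checkpoints)))
    return images

batch = run_batch(5)
```

In practice you'd bound the loop by a time budget rather than a fixed count, but the structure is the same.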

Batch generation output — hundreds of variations from one recipe

Most of them are decent. Some are garbage. A few are incredible. That's the whole point — you cast a wide net and pull out the gems.


The Cherry-Pick

This is the part that surprises people: I scroll through all 200-500 images manually.

No algorithm. No automated scoring. I just open the folder and start scrolling. When something catches my eye — a composition that works, a lighting effect that hits, a character that has presence — I drag it to a winners folder.

Out of 300 images, I might pull 15-25 that I like. That's my selection rate: roughly 1 in 15.

The ones I don't pick aren't necessarily bad. They're just not the best version of that concept. When you've seen the best version, the rest feel flat by comparison.

Winners pulled from the batch — the ones that caught my eye


The Final Polish

The winners from the batch are good — but they're not finished. They came out of txt2img, which means they're the raw concept. Now they go through the same finishing process as everything else.

For each winner, I:

  • Re-enhance the prompt — based on what actually came out, I tweak the prompt. Maybe I want to push the lighting harder, or remove an element that showed up but doesn't work, or add something I see potential for.
  • Run it through img2img with all my checkpoints — same image, 5 different models, each adding their own interpretation.
  • Multiple passes — 3 to 7 img2img passes per model, each one refining the details, cleaning up artifacts, tightening the composition.
  • Cherry-pick again — from the img2img results, I pick the absolute best version.
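The pass structure above can be sketched like so — `refine()` is a stub for a real img2img call, and the checkpoint names, pass count, and denoise value are illustrative assumptions:

```python
# Hypothetical checkpoints; a real polish run uses actual model files.
checkpoints = ["model_a", "model_b"]

def refine(image, checkpoint, denoise=0.35):
    return f"{image}->{checkpoint}"     # stand-in for one img2img pass

def polish(image, passes=3):            # 3-7 passes per model in practice
    candidates = []
    for ckpt in checkpoints:
        current = image
        for _ in range(passes):         # each pass refines the previous one
            current = refine(current, ckpt)
        candidates.append(current)
    return candidates                   # then cherry-pick the best by hand

results = polish("batch_pick_017")
```

The key design choice is that passes chain: each img2img run takes the previous output as its input, so detail accumulates instead of resetting.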

Final high-res results after img2img passes

The jump from the raw batch pick to the final polished version is significant. img2img is what takes a good composition and turns it into a finished piece.


The Final Result

One prompt. One recipe. Hundreds of variations. A handful of winners. Polished through multiple img2img passes across multiple models.

The journey: from concept to finished piece

This is how I produce 100-500 images a day. Not by writing 500 prompts — by building one recipe that generates 500 variations, then filtering down to the ones worth keeping.


The Two Workflows

Now you've seen both of my main approaches:

Flow A — The reference image pipeline. Start from a specific image, crop it, tag it, enhance it, generate targeted variations, polish. Best for when I have a clear vision of what I want.

Flow B — The recipe system (this tutorial). Start from a concept, build a recipe with variation slots, generate hundreds automatically, cherry-pick, polish. Best for exploring possibilities and producing volume.

Most days I use both. I'll run a recipe in the background while I work on specific reference-based images. The recipe generates quantity. The reference pipeline generates precision.


Want the Full System?

This tutorial showed you what the recipe system does. The detailed breakdowns — how to build recipes, how the splitter works, how to set up checkpoint cycling and LoRA mixing, how to configure the generation scripts — those are available for Prompt Insider and Full Workshop members.

If you're generating images one at a time and wondering how some people produce hundreds a day, this is the answer. It's not about being faster. It's about building a system that generates for you.