How I Remix Any Prompt Into Something New
The Third Door
In Tutorial #1, I showed you that I never start with a prompt — I start with a reference image, extracted tags, or someone else's prompt and build from there.
Tutorial #2 covered the first two starting points: reference images and tag extraction.
This tutorial is the third door: starting from a prompt you found somewhere and turning it into something completely different.
This is probably how most of you work right now — you find a prompt on Civitai, copy it, hit generate, and hope for the best. I do the same thing. The difference is what happens next.
Step 1: Grab the Prompt
I find prompts everywhere. Civitai mostly — when I'm scrolling through images and something catches my eye, I click through and look at the generation data. Every image on Civitai shows the full prompt, model, sampler, CFG, everything.
Sometimes I grab prompts from my own old generations. Sometimes I write one from scratch based on a concept in my head. Doesn't matter where it comes from. The point is: I never use it as-is.
The prompt I grab is just raw material. A starting point. What matters is what I do with it.
Step 2: The Splitter
The first thing I do is throw the prompt into my prompt splitter. This tool takes one long prompt and breaks every single tag and phrase into its own separate piece.
So if the prompt is:
1girl, blonde hair, blue eyes, standing in a field, white dress, simple background, looking at viewer, smile
The splitter breaks it into:
1girl
blonde hair
blue eyes
standing in a field
white dress
simple background
looking at viewer
smile
Each piece becomes its own slot. Now I can see the prompt like a recipe — individual ingredients I can swap, remove, or multiply.
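The splitter itself is part of my toolkit, but the core idea is simple enough to sketch. Here's a minimal illustration (not the real tool's code) of splitting a comma-separated, Danbooru-style prompt into slots:

```python
def split_prompt(prompt: str) -> list[str]:
    """Break a comma-separated prompt into individual tag slots."""
    return [tag.strip() for tag in prompt.split(",") if tag.strip()]

slots = split_prompt(
    "1girl, blonde hair, blue eyes, standing in a field, "
    "white dress, simple background, looking at viewer, smile"
)
print(slots)  # eight independent slots, ready to swap, remove, or multiply
```

Once each tag lives in its own slot, everything in the next step becomes an edit to a list instead of surgery on one long string.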
At this point I also load in all of my checkpoints and all of my LoRAs. The splitter knows about them and will cycle through them automatically during generation.
Step 3: Find the Boring Parts
This is where the creative work starts. I look at every slot and ask: "Is this interesting, or is this boring?"
Most prompts I find have boring parts. Not bad — just default. Safe choices that don't push the image anywhere exciting.
Here's what I look for:
Backgrounds. If it says simple background or white background or just nothing — that's boring. I'll replace it with 3-4 options:
underwater ancient city with bioluminescent coral
standing on a meteor flying through a nebula
rooftop of a cyberpunk megacity at golden hour
overgrown temple ruins with shafts of light
Now instead of one flat background, the splitter will try all four. Already we've gone from 1 image to 4 completely different moods.
Colors. If she has blonde hair, I'll add variations:
platinum silver hair
dark crimson hair
seafoam green hair
ash lavender hair
Same thing for eyes, same thing for clothing. Anything that has a color gets 3-5 options.
Outfits. If it says white dress — fine, but let's also try:
black tactical bodysuit
flowing ceremonial kimono
armored knight with cape
casual streetwear hoodie and shorts
Poses. If it says standing or looking at viewer — those are fine but I'll add:
sitting cross-legged
leaning against a wall
mid-jump action pose
from behind, looking over shoulder
The stuff I keep. Not everything gets variations. If the prompt has a specific character concept I like — say, horns or elf ears or a specific weapon — I leave that alone. That's the core identity. I'm changing the surrounding details, not the concept.
Step 4: The Low-Res Blast
Now I have a prompt with maybe 4 background options, 4 hair colors, 5 outfit styles, 4 poses, 5 checkpoints, and a bunch of LoRAs mixed in randomly. The math explodes fast — that's potentially thousands of unique combinations.
I run this at low steps and slightly smaller images. Not full quality. I'm not trying to make finished art yet — I'm scouting. Seeing what the combinations look like. Finding the direction.
The splitter fires off hundreds of images. Every combination it can hit in the time I give it.
Step 5: Read the Patterns
This is the part nobody talks about. You don't just generate and pick. You generate, look, and steer.
After the first 100-200 images come through, I start seeing patterns:
- "The green hair looks terrible with every checkpoint. Cut it."
- "The cyberpunk background works way better than the temple. Keep it, drop the others."
- "The kimono outfit is producing the best compositions. Double down."
- "That one LoRA is overpowering everything. Lower the weight or remove it."
So I go back into the splitter and edit mid-run. Remove the things that aren't working. Add new variations based on what IS working. Maybe I saw a color combination I didn't expect and now I want to explore that direction more.
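In slot terms, the mid-run edit is just pruning losers and multiplying winners. A hypothetical sketch (the real splitter's data model may differ):

```python
# Hypothetical slot table; the real splitter's data model may differ.
slots = {
    "hair": ["platinum silver hair", "dark crimson hair",
             "seafoam green hair", "ash lavender hair"],
    "background": ["underwater ancient city", "meteor in a nebula",
                   "cyberpunk rooftop", "overgrown temple ruins"],
}

# Cut what isn't working.
slots["hair"].remove("seafoam green hair")

# Keep only the winner, then double down with nearby variants of it.
slots["background"] = ["cyberpunk rooftop",
                       "cyberpunk rooftop at night, rain",
                       "cyberpunk rooftop at dawn, fog"]
```

The next batch of generations now spends all of its combinations inside the territory that was already working.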
This back-and-forth is the actual creative process. The AI generates, I react, I adjust, it generates again. It's a conversation.
After about 500 images total across a couple rounds of adjustments, the direction is locked in. I know what works.
Step 6: Pick the Favorites
Now I scroll through everything and pull out the winners. Out of 500 low-res images, I'm looking for maybe 20-30 that have something — good composition, interesting expression, a color palette that pops, a pose that feels natural.
These don't need to be perfect. They're low-res scouts, remember. I'm picking based on potential, not finish quality.
Step 7: The img2img Spiral
This is where it gets obsessive. I take my favorites and run them through img2img — feeding the image back into the AI with an enhanced prompt to add detail, sharpen features, improve lighting.
But I don't just do one pass. I do rounds.
Round 1: Take my 20-30 favorites, run img2img across multiple checkpoints. That gives me a few hundred new versions. Pick the best from those — maybe 10-15 survivors.
Round 2: Take those survivors, tweak the prompt slightly, run img2img again. Another hundred or so images. Pick the best again — maybe 8-10.
Round 3: Same thing. More refinement. Fewer images each round, but higher quality. By now I'm down to maybe 5-6 that I'm really excited about.
Round 4 (maybe): Sometimes I do one more pass if I'm close to something great. Sometimes I stop at round 3.
Each round is a funnel. Hundreds become dozens become a handful. Every round adds more detail, more polish, more of my specific vision layered on top.
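The funnel shape above can be put in rough numbers. The survivor counts come from the text; the versions-per-survivor figures are my illustrative assumptions, not exact settings:

```python
# Illustrative funnel: survivor counts match the rounds described above;
# "versions_each" (img2img outputs per survivor) is an assumed figure.
rounds = [
    {"survivors_in": 25, "versions_each": 12, "kept": 12},  # Round 1
    {"survivors_in": 12, "versions_each": 10, "kept": 9},   # Round 2
    {"survivors_in": 9,  "versions_each": 10, "kept": 6},   # Round 3
]
for i, r in enumerate(rounds, start=1):
    generated = r["survivors_in"] * r["versions_each"]
    print(f"Round {i}: {generated} img2img outputs -> {r['kept']} survivors")
```

Round 1 produces a few hundred versions, round 2 about a hundred, round 3 fewer still: hundreds become dozens become a handful.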
Step 8: Done (or Done Enough)
Eventually one of two things happens:
- I land on a few images I love. They go to Civitai with full metadata. They go to otakushowcase. Maybe the best one goes to the favorites folder as future reference material.
- I get tired of this concept. Not frustrated — just ready to move on. I've been looking at variations of the same character for hours. I save the best of what I have, post them, and tomorrow I start fresh with a completely different prompt.
Both are fine. The point isn't to force a masterpiece out of every session. The point is to explore a direction, find the best it has to offer, and move on.
What Changed From the Original Prompt
Let's say I started with that simple Civitai prompt:
1girl, blonde hair, blue eyes, standing in a field, white dress, simple background, looking at viewer, smile
After my process, the final image might be:
- A dark crimson-haired girl in a flowing ceremonial kimono
- Standing on a rooftop overlooking a neon-lit cyberpunk city at sunset
- Shot from a low angle with dramatic rim lighting
- Run through 4 rounds of img2img across 3 different checkpoints
- With LoRAs adding specific skin texture, fabric detail, and atmospheric haze
Nothing is left from the original except the core concept of "a girl." The prompt was just the spark. Everything else came from the splitting, the variation slots, the pattern-reading, and the img2img refinement.
That's why copying someone's prompt doesn't give you their results. The prompt is step 1 of a 500-image journey.
The Three Doors Are All the Same Hallway
Here's the thing I want you to notice: this process is almost identical to the other two starting points.
- Starting from a reference image → crop → tag → enhance → split → generate → pick → img2img
- Starting from extracted tags → enhance → split → generate → pick → img2img
- Starting from a prompt (this tutorial) → split → add variations → generate → pick → img2img
The entry point changes. The middle and end are the same. That's why I call them three doors into the same building. No matter which one you walk through, you end up in the same place: generating hundreds of variations, reading the patterns, narrowing down, and polishing through img2img.
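If you like to think in code, the shared hallway looks something like this. Every function here is an illustrative stub, not the real toolkit's API; the point is only that the three doors differ in their first line and nothing else:

```python
# Stub functions (illustrative only, not the real toolkit's API)
# showing that the three doors share the same middle and end.
def split(prompt):
    return [t.strip() for t in prompt.split(",") if t.strip()]

def add_variations(slots):
    return slots  # placeholder: swap 3-5 options into each boring slot

def shared_middle(slots):
    # placeholder for the common hallway:
    # generate hundreds -> read patterns -> pick favorites -> img2img rounds
    return slots

def door_from_prompt(prompt):  # this tutorial's entry point
    return shared_middle(add_variations(split(prompt)))

result = door_from_prompt("1girl, blonde hair, smile")
print(result)
```

Swap the first line for cropping-and-tagging a reference image, or for an extracted tag list, and the rest of the pipeline doesn't change.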
The prompt is never the destination. It's always just the first step.
Want to Build This System?
The prompt splitter and recipe tools I use for this process are available for Full Workshop members. The Prompt Insider tier gets you the enhanced prompt toolkit I use to upgrade prompts before splitting them.
If you're still copying prompts and hitting generate once, you're leaving 99% of the potential on the table.