
Checkpoints vs LoRAs vs Embeddings: What They Are and When to Use Each

admin · Apr 7, 2026 · 6 min read

The Short Version

  • Checkpoint = the entire AI model. It determines the overall style and quality of your images. This is the foundation.
  • LoRA = a small add-on that modifies the checkpoint. It can add a specific character, art style, or quality improvement without replacing the whole model.
  • Embedding = a trained concept packed into a tiny file that works through the prompt. Usually used for negative prompts (things you want to avoid).

That's it. Now let's go deeper.


Checkpoints: The Foundation

A checkpoint is the big file — usually 2-7GB. It's the full AI model trained on millions of images. When you select a model in the dropdown at the top of A1111 or the Load Checkpoint node in ComfyUI, you're picking a checkpoint.

The checkpoint determines your ceiling. A realistic checkpoint will never produce good anime. An anime checkpoint will never produce photorealistic skin. You can't prompt your way past the checkpoint's training.

This is why I run every image through multiple checkpoints. Each one interprets the same prompt differently:

  • One checkpoint gives you better skin texture
  • Another nails the lighting but fumbles the hands
  • A third produces something you never expected

When to switch checkpoints: When the style of your output isn't what you want. No amount of prompt tweaking will turn an anime checkpoint into a realistic one. Switch the model.

How many do you need? Start with 1-2 that match the style you want. I currently rotate through 5 for my daily work, but I built up to that over time. More checkpoints = more variety, but also more disk space and more time spent comparing.

Where to find them: Civitai — filter by "Checkpoint" and sort by most downloaded or highest rated. Study the example images to see what the checkpoint is good at.


LoRAs: The Add-Ons

A LoRA (Low-Rank Adaptation) is a small file — usually 10-200MB — that modifies how a checkpoint behaves. You don't replace the checkpoint. You stack the LoRA on top of it.

LoRAs come in a few flavors:

Character LoRAs — trained on a specific character. Add it and that character appears in your images. Useful for fan art or maintaining a consistent character across many images.

Style LoRAs — trained on a specific art style. Painterly, cel-shaded, watercolor, film grain, etc. Changes the aesthetic without changing the checkpoint.

Quality/Detail LoRAs — trained to improve specific aspects. Better hands, better eyes, better fabric detail, better lighting. These are the subtle ones that make everything look a little more polished. The Ri-mix LoRA I recently added to my pipeline is one of these — it improves skin, shadows, and color nuance across the board.

Concept LoRAs — trained on a specific concept. A specific pose, clothing item, environment, or visual effect.

How to use them: Drop the .safetensors file into your models/Lora/ folder. In A1111, add <lora:filename:strength> to your prompt, where strength is usually between 0.3 and 1.0. In ComfyUI, load the file with a Load LoRA node instead.

The strength slider matters a lot. Too low and the LoRA barely does anything. Too high and it overpowers everything — faces get weird, colors go wrong, the style breaks. Start at 0.6-0.7 for character LoRAs and 0.3-0.5 for style LoRAs. Adjust from there.
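Putting the syntax and the starting strengths together, a prompt using the A1111 tag might look like this (the LoRA filename here is made up for illustration):

```
masterpiece, portrait of a woman, city street at night, <lora:exampleCharacter_v2:0.7>
```

The tag can sit anywhere in the prompt; what matters is the filename matching the file in models/Lora/ and the strength value after the second colon.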


Stacking LoRAs

You can use multiple LoRAs at the same time. This is where it gets powerful — and where it gets tricky.

A typical stack might be:

  • 1 character LoRA at 0.7
  • 1 style LoRA at 0.4
  • 1 quality LoRA at 0.3

Rules of thumb:

  • Total LoRA weights shouldn't go too high. If you've got three LoRAs all at 1.0, they'll fight each other and produce garbage. As a rough starting point, keep the combined weight under about 1.5 and adjust from there.
  • Character + style can conflict. If the character LoRA was trained on anime and the style LoRA is going for realism, they'll pull in opposite directions. Match your LoRAs to your checkpoint.
  • Quality LoRAs play nice with everything. They're usually safe to add to any stack at low weights (0.2-0.4).
  • Test each LoRA by itself first. Before stacking, generate a few images with just the one LoRA so you know what it does. Then add the others one at a time.
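To make the stacking arithmetic concrete, here's a small sketch of a helper that builds the prompt tags for a stack and flags a high combined weight. The function names and the 1.5 cutoff are my own illustration, not part of any tool — it's just the rule of thumb above expressed as code:

```python
def lora_tag(name: str, strength: float) -> str:
    """Format one LoRA as an A1111-style prompt tag."""
    return f"<lora:{name}:{strength}>"

def build_stack(loras: dict[str, float], max_total: float = 1.5) -> str:
    """Join several LoRA tags into one prompt fragment.

    Prints a warning when the combined weight passes max_total;
    1.5 is a rough rule of thumb, not a hard limit.
    """
    total = sum(loras.values())
    if total > max_total:
        print(f"warning: combined LoRA weight {total:.1f} may cause conflicts")
    return " ".join(lora_tag(name, s) for name, s in loras.items())

# The typical stack from above: character 0.7, style 0.4, quality 0.3
stack = build_stack({"characterA": 0.7, "styleB": 0.4, "detailC": 0.3})
print(stack)  # → <lora:characterA:0.7> <lora:styleB:0.4> <lora:detailC:0.3>
```

The combined weight here is 1.4, so no warning fires; bump all three to 1.0 and it would.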

Embeddings: The Tiny Ones

Embeddings (also called Textual Inversions) are the smallest model type — usually under 100KB. Instead of containing model weights, they pack a trained concept into a token vector that the text encoder understands; you activate it with a keyword in your prompt.

The most common use: negative prompt embeddings. Instead of typing out a long negative prompt listing everything you don't want, you use an embedding that was trained on all those bad qualities:

Negative prompt: EasyNegative, bad-hands-5

Those two words activate trained embeddings that tell the model to avoid bad anatomy, artifacts, and other common problems. Way easier than typing 50 negative tags.

How to use them: Drop the .pt or .safetensors file into your embeddings/ folder. Use the filename as a keyword in your prompt or negative prompt.

You don't need many. 1-2 good negative prompt embeddings cover most use cases. I use them on nearly every image but I don't think about them — they're just part of my baseline negative prompt.


How They Work Together

Think of it like cooking:

  • Checkpoint = the type of cuisine (Italian, Japanese, Mexican)
  • LoRAs = specific ingredients and seasonings that modify the dish
  • Embeddings = dietary restrictions — things to avoid

You pick a checkpoint that matches the overall style you want. You add LoRAs to push it in a specific direction — a character, a quality boost, a style tweak. You use embeddings to filter out the things you don't want.


Where to Start

If you're just beginning:

  1. Pick 1-2 checkpoints that match the style you want to make
  2. Download 1 negative prompt embedding (EasyNegative is the classic)
  3. Don't worry about LoRAs yet. Get comfortable with your checkpoint first. Learn what it can do with just prompts. Then start adding LoRAs when you want something your checkpoint can't do on its own.

Once you're stacking LoRAs and cycling through checkpoints, you're getting into the territory of my daily workflow — running the same prompt through multiple models and picking the best result. But that comes later. Walk before you run.

