LoRA Training Guide
Why train LoRAs?
Even with advanced AI models, you can't always get consistent results.
Consistency & Control
Generate 1,000 images and maybe 100 match your vision. Train a LoRA on those 100 images, and nearly every result going forward will match. The model learns exactly what you want.
Cost & Speed
LoRAs can cut generation costs by 4-5x while running 4-5x faster.
How many images do you need?
Depends on your LoRA's complexity and intended use cases.
Think about all the scenarios where you'll use your LoRA, then include training examples for each.
Example: If you train a spritesheet LoRA with 100 character images but no buildings, the LoRA won't work for buildings. Add examples for every use case you need.
Guidelines:
- Simple concepts (single style): 15-30 images
- Medium complexity (character + variations): 30-60 images
- Complex concepts (multiple use cases): 60-100+ images
Paired images (for image-editing LoRAs)
For image editing tasks, you need paired images — one "before" state, one "after" state.
Naming convention
Use _start and _end suffixes:
- image001_start.jpg → The original/input image
- image001_end.jpg → The target/output image
Both images must share the same base name. The system matches pairs by name.
Example: Background removal LoRA
| File | Description |
|---|---|
| photo001_start.jpg | Original photo with background |
| photo001_end.jpg | Same photo with background removed |
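Since the system matches pairs purely by name, it's worth sanity-checking the pairing before you upload. Here's a minimal Python sketch, assuming a flat folder of .jpg files; the function name and folder path are illustrative:

```python
from pathlib import Path

def find_pairs(dataset_dir: str) -> dict:
    """Group *_start.jpg / *_end.jpg files by their shared base name."""
    pairs = {}
    for path in sorted(Path(dataset_dir).glob("*.jpg")):
        stem = path.stem  # e.g. "photo001_start"
        if stem.endswith("_start"):
            base, role = stem[: -len("_start")], "start"
        elif stem.endswith("_end"):
            base, role = stem[: -len("_end")], "end"
        else:
            continue  # not part of a pair; rename or remove before training
        pairs.setdefault(base, {})[role] = path
    return pairs

pairs = find_pairs("dataset/")
for base, roles in pairs.items():
    for missing in {"start", "end"} - roles.keys():
        print(f"{base}: missing _{missing} image")
print(f"{sum(len(r) == 2 for r in pairs.values())} complete pair(s)")
```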
Multiple input images
For models that accept multiple reference images, extend the naming system:
Naming convention
- _start → First input image
- _start1 → Second input image
- _start2 → Third input image
- _end → Target output image
Example: Virtual try-on LoRA
| File | Content |
|---|---|
| sample035_start.jpg | Woman portrait |
| sample035_start1.jpg | Glasses photo |
| sample035_start2.jpg | Hat photo |
| sample035_end.jpg | Portrait with woman wearing glasses and hat |
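The same check extends to multiple inputs. A sketch under the assumption that roles are encoded exactly as _start, _start1, _start2, and _end in the file stem:

```python
import re
from collections import defaultdict
from pathlib import Path

# Role suffixes: _start, _start1, _start2, ... plus the _end target.
SUFFIX_RE = re.compile(r"^(?P<base>.+)_(?P<role>start\d*|end)$")

def group_inputs(dataset_dir: str) -> dict:
    """Group files like sample035_start.jpg / _start1.jpg / _end.jpg by base name."""
    groups = defaultdict(dict)
    for path in sorted(Path(dataset_dir).glob("*.jpg")):
        match = SUFFIX_RE.match(path.stem)
        if match:
            groups[match["base"]][match["role"]] = path
    return dict(groups)

for base, roles in group_inputs("dataset/").items():
    inputs = sorted(role for role in roles if role.startswith("start"))
    status = "ok" if "end" in roles else "MISSING _end target"
    print(f"{base}: {len(inputs)} input(s), {status}")
```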
Adding captions (optional)
You can improve training quality by providing text descriptions for each image set.
How to add captions
Create a .txt file with the same base name as your images:
File: sample035.txt
Recreate the portrait by placing the glasses from the second image
and the hat from the third image on the woman in the first image.
This helps the model understand the relationship between inputs and outputs.
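If you're writing captions programmatically, a small helper like this keeps the base names in sync with the images; the function name and paths are illustrative:

```python
from pathlib import Path

def write_caption(dataset_dir: str, base_name: str, caption: str) -> None:
    """Write a caption .txt sharing the base name of its image set."""
    (Path(dataset_dir) / f"{base_name}.txt").write_text(
        caption.strip() + "\n", encoding="utf-8"
    )

write_caption(
    "dataset/",
    "sample035",  # matches sample035_start.jpg, sample035_start1.jpg, ...
    "Recreate the portrait by placing the glasses from the second image "
    "and the hat from the third image on the woman in the first image.",
)
```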
Training parameters
Steps
The number of times the model processes your training data.
- Too few steps → Model doesn't learn enough
- Too many steps → Overfitting (memorizes rather than learns)
Starting point: For ~20 paired images, try 1,000 steps.
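For larger or smaller datasets, one rough way to scale that starting point is a fixed steps-per-image budget. This sketch simply extrapolates the "~20 images → 1,000 steps" suggestion above; every constant is a tunable assumption, not a fixed rule:

```python
def suggested_steps(num_images: int, steps_per_image: int = 50,
                    minimum: int = 500, maximum: int = 4000) -> int:
    """Rough starting point: ~50 steps per training image, clamped.

    Extrapolated from the "20 images -> 1,000 steps" rule of thumb;
    every constant here is a tunable assumption, not a fixed rule.
    """
    return max(minimum, min(maximum, num_images * steps_per_image))

print(suggested_steps(20))  # 1000
print(suggested_steps(60))  # 3000
```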
Learning rate
How much the model adjusts its weights with each step.
The balloon analogy:
| Factor | Balloon equivalent |
|---|---|
| Steps | How many times you blow |
| Learning rate | How hard you blow each time |
- Blow too softly (low LR) → Need more breaths to reach target size
- Blow too hard (high LR) → Risk popping the balloon
Find the sweet spot: efficient per step, but not so aggressive that training becomes unstable.
Typical values: 1e-4 to 5e-4. Adjust based on dataset size and complexity.
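Putting the two parameters together, a training configuration might look like the sketch below. The field names are illustrative, not any specific trainer's schema; check your platform's docs for the real ones:

```python
# Illustrative configuration only; these field names are assumptions,
# not any specific trainer's schema.
training_config = {
    "dataset_dir": "dataset/",        # paired images + optional .txt captions
    "steps": 1000,                    # starting point for ~20 paired images
    "learning_rate": 1e-4,            # typical range: 1e-4 to 5e-4
    "output": "my_lora.safetensors",  # the trained LoRA weights
}
```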
After training
Training outputs a .safetensors file — this is your LoRA.
How to use it
- Go to the inference/generation page for your base model
- Add your LoRA file URL to the LoRA input field
- Generate with your custom-trained model
Your LoRA adapts the base model to match your training data while keeping the model's general capabilities.
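What the request looks like depends entirely on your platform. As a purely hypothetical illustration (the endpoint URL and field names are placeholders, not a real API), passing the LoRA file URL alongside your prompt might look like:

```python
import requests  # pip install requests

# Hypothetical request only: the endpoint URL and field names
# ("prompt", "lora_url") are placeholders, not a real API.
response = requests.post(
    "https://example.com/api/generate",
    json={
        "prompt": "a character spritesheet in my trained style",
        "lora_url": "https://example.com/files/my_lora.safetensors",
    },
    timeout=120,
)
response.raise_for_status()
print(response.json())
```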
Training checklist
- Prepare dataset: Collect images covering all intended use cases
- Name files correctly: Use _start/_end suffixes for paired images
- Add captions (optional): Create .txt files with descriptions
- Set steps: Start with 1,000, adjust as needed
- Configure learning rate: Start with 1e-4
- Upload and train
- Test your LoRA: Generate samples to verify quality
Tips
💡 Quality over quantity — 20 good images beat 100 mediocre ones
💡 Cover your variations — Include examples for all use cases
💡 Match resolutions — Keep training images at consistent dimensions (see the check after this list)
💡 Iterate — First attempt rarely perfect; refine and retrain
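For the resolution tip above, a quick way to audit a dataset is to tally image dimensions, as in this sketch (assuming Pillow is installed and the images are .jpg):

```python
from collections import Counter
from pathlib import Path

from PIL import Image  # pip install Pillow

# Tally dimensions so inconsistent resolutions stand out at a glance.
sizes = Counter(Image.open(p).size for p in Path("dataset/").glob("*.jpg"))
for (width, height), count in sizes.most_common():
    print(f"{width}x{height}: {count} image(s)")
```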
Start Training Now
You've learned the basics — now put it into practice.