Possible problems and solutions when using LoRA with Stable Diffusion

 1. Installation and environmental issues

 1.1. Library version mismatch

 - Error: versions of libraries such as `torch`, `xformers`, and `diffusers` conflict with one another

 - Solution: upgrade to the latest compatible versions with the following command

 

 pip install --upgrade torch torchvision torchaudio xformers diffusers
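
To confirm what is actually installed (before or after upgrading), a small stdlib-only check can be run; the package list simply mirrors the install command above:

```python
from importlib import metadata

def get_version(package: str) -> str:
    # Return the installed version string, or a marker if the package is absent.
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return "not installed"

for pkg in ("torch", "torchvision", "torchaudio", "xformers", "diffusers"):
    print(f"{pkg}: {get_version(pkg)}")
```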

 

 1.2. Python version issue

 - Error: `SyntaxError: invalid syntax` or `ModuleNotFoundError` occurs

- Solution: use Python 3.10 or higher (older Colab runtimes defaulted to 3.9)

 

 !apt install python3.10

 !update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1
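
The interpreter version can also be verified from inside Python itself (a minimal sketch; `check_python` is a hypothetical helper, not part of any library):

```python
import sys

def check_python(required=(3, 10), current=None):
    # Compare the running interpreter (or an explicit version tuple)
    # against the minimum required version.
    current = tuple(current or sys.version_info[:2])
    return current >= required

if not check_python():
    print(f"Python {sys.version_info.major}.{sys.version_info.minor} is too old; "
          "3.10+ is recommended")
```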

 

 2. Problems with LoRA model

 2.1. Model file path issue

 - Error: `FileNotFoundError: No such file or directory`

- Cause: the LoRA model was not placed in the **correct folder**

 - Solution: place the LoRA model in the folder below

 

 stable-diffusion-webui/models/Lora

 

 2.2. Model file is corrupted

 - Error: `RuntimeError: unexpected EOF while reading`

 - Solution: re-download the file and overwrite the corrupted copy

 

 wget -O models/Lora/example.safetensors https://huggingface.co/path/to/lora.safetensors
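
Before re-downloading, it can help to confirm the file really is truncated. The `.safetensors` layout starts with an 8-byte little-endian header length followed by a JSON header, so a stdlib-only sanity check is possible (a heuristic sketch: it validates the header only, not the tensor payloads):

```python
import json
import struct
from pathlib import Path

def safetensors_header_intact(path: str) -> bool:
    # .safetensors files begin with an 8-byte little-endian header length,
    # followed by that many bytes of JSON metadata.
    data = Path(path).read_bytes()
    if len(data) < 8:
        return False  # truncated before the length field
    (header_len,) = struct.unpack("<Q", data[:8])
    if len(data) < 8 + header_len:
        return False  # file ends inside the header: typical truncated download
    try:
        json.loads(data[8:8 + header_len])
    except json.JSONDecodeError:
        return False
    return True
```

If this returns `False` for `models/Lora/example.safetensors`, re-download it with the `wget` command above.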

 

 2.3. Unsupported model format

 - Error: `RuntimeError: Unrecognized file format`

 - Solution: use the `.safetensors` format, which is the safest and recommended format

 3. Issues with scripts and WebUI

 3.1. WebUI does not recognize LoRA

 - Error: the LoRA model is not applied or cannot be selected

 - Solution:

 - Install `extensions/sd-webui-additional-networks` plugin

  - Check how the LoRA is applied (it must be enabled in the WebUI)

 3.2. `lora` option is missing or not applicable

 - Cause: the `--enable-insecure-extension-access` option is required

 - Solution: add the following option when launching

 

 python launch.py --enable-insecure-extension-access

 

 4. Generated image problems

 4.1. LoRA model applied too strongly (over-application)

 - Solution:

  - Adjust `LoRA Strength` value to **0.5~0.7**

  - Adjust `weight` at the prompt (`<lora:model_name:0.6>`)
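
When prompts are generated from a script, the `<lora:model_name:weight>` tag above can be assembled with a tiny helper (`lora_tag` is a hypothetical function name; the tag syntax itself is the WebUI convention quoted above):

```python
def lora_tag(model_name: str, weight: float = 0.6) -> str:
    # Build a WebUI-style LoRA prompt tag, e.g. <lora:my_model:0.6>.
    return f"<lora:{model_name}:{weight}>"

prompt = "masterpiece, best quality, " + lora_tag("example_style", 0.6)
print(prompt)
```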

 4.2. Image quality deteriorates when applying LoRA

 - Solution:

  - Check compatibility between the base model and the LoRA

  - Replace the VAE model (`anything-v4.vae.pt` recommended)

 5. VRAM related issues

 5.1. `CUDA Out of Memory` error occurred

 - Solution:

 - Reduce VRAM usage by adding the `--xformers` option

   

 python launch.py --xformers

   

 - Reduce the image resolution (to `512x512` or `768x768`)
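
Why lowering the resolution helps so much: the U-Net attends over latent tokens (the image downscaled by 8 per side), and naive self-attention memory grows roughly with the square of the token count. A back-of-the-envelope sketch (a simplified model, not an exact VRAM figure):

```python
def latent_tokens(width: int, height: int, downscale: int = 8) -> int:
    # Stable Diffusion's VAE downscales the image by 8 per side before the U-Net.
    return (width // downscale) * (height // downscale)

def attention_memory_ratio(w1: int, h1: int, w2: int, h2: int) -> float:
    # Naive self-attention stores an N x N score matrix, so memory ~ N**2.
    return (latent_tokens(w2, h2) / latent_tokens(w1, h1)) ** 2

# Under this model, 768x768 needs roughly 5x the attention memory of 512x512.
print(attention_memory_ratio(512, 512, 768, 768))
```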

 5.2. Insufficient memory when using local GPU

 - Solution:

  - Use a Google Colab GPU runtime (**T4, A100**)

 - Add the `--medvram` or `--lowvram` option

   

 python launch.py --medvram --opt-split-attention
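
Which flag combination to pick depends on available VRAM. The thresholds below are illustrative assumptions, not official guidance from the WebUI project:

```python
def suggest_launch_flags(vram_gb: float) -> list:
    # Rough heuristic: plenty of VRAM -> just xformers; otherwise trade
    # speed for memory with --medvram / --lowvram. Thresholds are assumptions.
    if vram_gb >= 8:
        return ["--xformers"]
    if vram_gb >= 4:
        return ["--medvram", "--opt-split-attention"]
    return ["--lowvram", "--opt-split-attention"]

print("python launch.py " + " ".join(suggest_launch_flags(6)))
```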

   
