What is LoRA?
Textual Inversion is a technique for packing an "appearance that is hard to describe in text" into a single token, but it cannot make the model draw something it never knew how to draw in the first place.
When you wanted the model to draw something it originally could not, you used to have to fine-tune the entire model, and the training cost of that is considerable.
This is where LoRA (Low-Rank Adaptation), a technique originally used with LLMs, came into use.
Rather than rewriting the model weights themselves, LoRA stores the "changed part" externally as a small piece of additional data.
It feels like loading an expansion pack on top of the base model after the fact, letting you add new styles and characters.
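The core idea can be sketched with NumPy. A layer's weight update is stored as two thin matrices whose product is the "changed part"; the dimensions and rank below are illustrative, not taken from any particular model.

```python
import numpy as np

# Base weight matrix of one layer (e.g. a 768x768 attention projection).
d = 768
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))

# LoRA stores the *change* as two thin matrices of rank r << d.
r = 8
A = rng.standard_normal((r, d)) * 0.01  # "down" projection
B = np.zeros((d, r))                    # "up" projection, zero at init

# Applying the LoRA: the base weights are untouched; the low-rank
# product is simply added on top at load time.
W_adapted = W + B @ A

# Storage comparison: a full fine-tuned delta vs. the LoRA factors.
full_params = d * d          # 589,824 values per layer
lora_params = d * r + r * d  # 12,288 values, about 2% of the full size
print(full_params, lora_params)
```

Because only the small `B @ A` factors are saved, a LoRA file is tiny compared to a full checkpoint, which is why it can be distributed and swapped in like an add-on.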
text2image with a LoRA applied
Downloading the LoRA
As an example, we will use a LoRA that gives images a pixel-art look.
- 8bitdiffuser 64x
📂ComfyUI/
└── 📂models/
    └── 📂loras/
        └── PX64NOCAP_epoch_10.safetensors
Workflow

{
"id": "8b9f7796-0873-4025-be3c-0f997f67f866",
"revision": 0,
"last_node_id": 11,
"last_link_id": 15,
"nodes": [
{
"id": 8,
"type": "VAEDecode",
"pos": [
1209,
188
],
"size": [
210,
46
],
"flags": {},
"order": 7,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 7
},
{
"name": "vae",
"type": "VAE",
"link": 10
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"slot_index": 0,
"links": [
9
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.33",
"Node name for S&R": "VAEDecode"
},
"widgets_values": []
},
{
"id": 9,
"type": "SaveImage",
"pos": [
1451,
189
],
"size": [
354.2876035004722,
433.23967321788405
],
"flags": {},
"order": 8,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 9
}
],
"outputs": [],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.33"
},
"widgets_values": [
"ComfyUI"
]
},
{
"id": 7,
"type": "CLIPTextEncode",
"pos": [
416.1970166015625,
392.37848510742185
],
"size": [
410.75801513671877,
158.82607910156253
],
"flags": {},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 14
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
6
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.33",
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"text, watermark"
]
},
{
"id": 5,
"type": "EmptyLatentImage",
"pos": [
582.1350317382813,
606.5799999999999
],
"size": [
244.81999999999994,
106
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"slot_index": 0,
"links": [
2
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.33",
"Node name for S&R": "EmptyLatentImage"
},
"widgets_values": [
512,
512,
1
]
},
{
"id": 11,
"type": "LoraLoader",
"pos": [
82.16589030803895,
333.495116453795
],
"size": [
280.9090909090909,
126
],
"flags": {},
"order": 3,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 11
},
{
"name": "clip",
"type": "CLIP",
"link": 15
}
],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
12
]
},
{
"name": "CLIP",
"type": "CLIP",
"links": [
13,
14
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.76",
"Node name for S&R": "LoraLoader"
},
"widgets_values": [
"1.5\\1.5-dpo-LoRA.safetensors",
1,
1
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
415,
186
],
"size": [
411.95503173828126,
151.0030493164063
],
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 13
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
4
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.33",
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"pixel_art,high quality,Illustration of a single red rose in a vase"
],
"color": "#432",
"bgcolor": "#653"
},
{
"id": 10,
"type": "VAELoader",
"pos": [
896.9256198347109,
68.77178286934158
],
"size": [
281.0743801652891,
58
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "VAE",
"type": "VAE",
"links": [
10
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.76",
"Node name for S&R": "VAELoader"
},
"widgets_values": [
"vae-ft-mse-840000-ema-pruned.safetensors"
]
},
{
"id": 4,
"type": "CheckpointLoaderSimple",
"pos": [
-264.15536196608537,
333.495116453795
],
"size": [
315,
98
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"slot_index": 0,
"links": [
11
]
},
{
"name": "CLIP",
"type": "CLIP",
"slot_index": 1,
"links": [
15
]
},
{
"name": "VAE",
"type": "VAE",
"slot_index": 2,
"links": []
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.33",
"Node name for S&R": "CheckpointLoaderSimple"
},
"widgets_values": [
"v1-5-pruned-emaonly-fp16.safetensors"
]
},
{
"id": 3,
"type": "KSampler",
"pos": [
863,
186
],
"size": [
315,
262
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 12
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 4
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 6
},
{
"name": "latent_image",
"type": "LATENT",
"link": 2
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"slot_index": 0,
"links": [
7
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.33",
"Node name for S&R": "KSampler"
},
"widgets_values": [
1234,
"fixed",
20,
8,
"euler",
"normal",
1
]
}
],
"links": [
[
2,
5,
0,
3,
3,
"LATENT"
],
[
4,
6,
0,
3,
1,
"CONDITIONING"
],
[
6,
7,
0,
3,
2,
"CONDITIONING"
],
[
7,
3,
0,
8,
0,
"LATENT"
],
[
9,
8,
0,
9,
0,
"IMAGE"
],
[
10,
10,
0,
8,
1,
"VAE"
],
[
11,
4,
0,
11,
0,
"MODEL"
],
[
12,
11,
0,
3,
0,
"MODEL"
],
[
13,
11,
1,
6,
0,
"CLIP"
],
[
14,
11,
1,
7,
0,
"CLIP"
],
[
15,
4,
1,
11,
1,
"CLIP"
]
],
"groups": [],
"config": {},
"extra": {
"ds": {
"scale": 0.8264462809917354,
"offset": [
364.15536196608537,
32.43821713065842
]
},
"frontendVersion": "1.34.6",
"VHS_latentpreview": false,
"VHS_latentpreviewrate": 0,
"VHS_MetadataImage": true,
"VHS_KeepIntermediate": true
},
"version": 0.4
}
- 🟩 Add a Load LoRA node.
  - Connect it so that it sits between Load Checkpoint and CLIP Text Encode/KSampler.
  - Both MODEL and CLIP need to pass through Load LoRA.
  - strength_model/strength_clip: how strongly the LoRA is applied. 1.0 is the baseline; lower it if the effect is too strong.
- 🟨 Trigger word
  - Just by applying the LoRA, the ability to draw pixel art has internally been layered onto the base model.
  - However, to reliably draw out that ability, the prompt needs to include the word the author used during training.
  - This is called the trigger word. For this LoRA, pixel_art is the trigger word.
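The strength settings above have a simple interpretation: the learned low-rank delta is scaled before being added to the base weights. A minimal NumPy sketch (shapes are illustrative, and `apply_lora` is a stand-in, not a real ComfyUI function):

```python
import numpy as np

def apply_lora(W, B, A, strength=1.0):
    """Merge a LoRA delta into a base weight, scaled by strength.

    strength plays the role of strength_model/strength_clip in the
    Load LoRA node: 1.0 applies the full learned delta, smaller
    values tone the effect down, and 0.0 leaves the base model as-is.
    """
    return W + strength * (B @ A)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 2))
A = rng.standard_normal((2, 4))

full = apply_lora(W, B, A, strength=1.0)
half = apply_lora(W, B, A, strength=0.5)
off = apply_lora(W, B, A, strength=0.0)

# strength=0.0 is exactly the base model; 0.5 sits halfway in between.
assert np.allclose(off, W)
assert np.allclose(half, (W + full) / 2)
```

This is why lowering the strength smoothly fades the LoRA's influence rather than switching it off abruptly.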
Models from Flux.1 onward and LoRA
A shift in the design philosophy of image-generation AI
With Stable Diffusion 1.5 and SDXL, applying a LoRA typically meant training both the diffusion model at the core of image generation and the text encoder that interprets the prompt.
With models from Flux.1 onward, however, the text encoder is a large language model such as T5 or Qwen.
These are like small ChatGPTs, already equipped with general language understanding; retraining them for image generation is inefficient and can even degrade performance.
As a result, the mainstream design in the latest models is to freeze the text encoder and train only the diffusion model itself.
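The difference between the two training setups comes down to which parameter groups receive adapters. A plain-Python sketch, with hypothetical parameter names loosely modeled on SD/Flux checkpoints (not the exact on-disk naming):

```python
# Hypothetical parameter names for illustration only.
all_params = [
    "text_encoder.t5.block.0.attn.q",
    "text_encoder.clip_l.layers.0.mlp.fc1",
    "diffusion_model.double_blocks.0.img_attn.qkv",
    "diffusion_model.single_blocks.0.linear1",
]

def lora_targets(param_names, train_text_encoder):
    """Pick which weights receive LoRA adapters.

    SD1.5/SDXL-style training passes train_text_encoder=True;
    Flux.1-style training freezes the encoder and passes False.
    """
    targets = []
    for name in param_names:
        if name.startswith("diffusion_model."):
            targets.append(name)
        elif train_text_encoder and name.startswith("text_encoder."):
            targets.append(name)
    return targets

sdxl_style = lora_targets(all_params, train_text_encoder=True)
flux_style = lora_targets(all_params, train_text_encoder=False)
```

In the Flux.1-style setup, every `text_encoder.*` weight stays frozen, so the resulting LoRA file contains deltas for the diffusion model alone.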
LoRA follows suit
LoRA has followed this shift as well.
Up to SDXL, both the diffusion model and the text encoder were trained;
with models from Flux.1 onward, LoRA training and application are likewise limited to the diffusion model.
Changes to the ComfyUI workflow
You could still use the Load LoRA node, but wiring nodes into a CLIP connection that goes unused is not very clean.
Instead, the LoraLoaderModelOnly node is provided.
As the name suggests, it applies the LoRA to the MODEL (the diffusion model) only.
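Conceptually, this node filters the LoRA file down to its diffusion-model patches and ignores any text-encoder entries. A sketch of that filtering, with illustrative key prefixes rather than the exact on-disk naming:

```python
# A toy LoRA state dict; real files map these keys to weight tensors.
lora_file = {
    "lora_unet_double_blocks_0_img_attn.lora_down": "...",
    "lora_unet_double_blocks_0_img_attn.lora_up": "...",
    "lora_te_text_model_layers_0.lora_down": "...",
}

def model_only_patches(lora_state):
    """Keep only the diffusion-model ("unet") patches, dropping any
    text-encoder ("te") entries, as a model-only loader would."""
    return {
        k: v for k, v in lora_state.items()
        if k.startswith("lora_unet_")
    }

patches = model_only_patches(lora_file)
```

The upshot is that the node needs only a MODEL input and output, so the workflow stays free of dangling CLIP connections.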

{
"id": "18404b37-92b0-4d11-a39c-ae941838eb83",
"revision": 0,
"last_node_id": 45,
"last_link_id": 65,
"nodes": [
{
"id": 35,
"type": "FluxGuidance",
"pos": [
836,
190
],
"size": [
211.60000610351562,
58
],
"flags": {},
"order": 7,
"mode": 0,
"inputs": [
{
"name": "conditioning",
"type": "CONDITIONING",
"link": 56
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
57
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "FluxGuidance"
},
"widgets_values": [
3.5
]
},
{
"id": 33,
"type": "CLIPTextEncode",
"pos": [
518,
378
],
"size": [
414.71820068359375,
108.47611236572266
],
"flags": {
"collapsed": true
},
"order": 5,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 60
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
55
]
}
],
"title": "CLIP Text Encode (Negative Prompt)",
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
""
]
},
{
"id": 42,
"type": "DualCLIPLoader",
"pos": [
185.0587921142578,
235.1116485595703
],
"size": [
270,
130
],
"flags": {},
"order": 0,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "CLIP",
"type": "CLIP",
"links": [
59,
60
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "DualCLIPLoader"
},
"widgets_values": [
"clip_l.safetensors",
"t5xxl_fp8_e4m3fn.safetensors",
"flux",
"default"
]
},
{
"id": 41,
"type": "UNETLoader",
"pos": [
527.2304526084715,
34.5730778881735
],
"size": [
270,
82
],
"flags": {},
"order": 1,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
63
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "UNETLoader"
},
"widgets_values": [
"Flux\\flux1-dev-fp8.safetensors",
"default"
]
},
{
"id": 8,
"type": "VAEDecode",
"pos": [
1408,
190
],
"size": [
140,
46
],
"flags": {},
"order": 9,
"mode": 0,
"inputs": [
{
"name": "samples",
"type": "LATENT",
"link": 52
},
{
"name": "vae",
"type": "VAE",
"link": 62
}
],
"outputs": [
{
"name": "IMAGE",
"type": "IMAGE",
"slot_index": 0,
"links": [
65
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "VAEDecode"
},
"widgets_values": []
},
{
"id": 44,
"type": "LoraLoaderModelOnly",
"pos": [
828.5090970126064,
34.5730778881735
],
"size": [
219.09090909090924,
82
],
"flags": {},
"order": 6,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 63
}
],
"outputs": [
{
"name": "MODEL",
"type": "MODEL",
"links": [
64
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.76",
"Node name for S&R": "LoraLoaderModelOnly"
},
"widgets_values": [
"Flux.1\\AWPortrait-FL-lora.safetensors",
0.8
],
"color": "#232",
"bgcolor": "#353"
},
{
"id": 27,
"type": "EmptySD3LatentImage",
"pos": [
795.1570061035156,
471
],
"size": [
252.44299999999998,
108.66200000000003
],
"flags": {},
"order": 2,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"slot_index": 0,
"links": [
51
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "EmptySD3LatentImage"
},
"widgets_values": [
1024,
1024,
1
]
},
{
"id": 6,
"type": "CLIPTextEncode",
"pos": [
507,
190
],
"size": [
301.84503173828125,
128.01304626464844
],
"flags": {},
"order": 4,
"mode": 0,
"inputs": [
{
"name": "clip",
"type": "CLIP",
"link": 59
}
],
"outputs": [
{
"name": "CONDITIONING",
"type": "CONDITIONING",
"slot_index": 0,
"links": [
56
]
}
],
"title": "CLIP Text Encode (Positive Prompt)",
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "CLIPTextEncode"
},
"widgets_values": [
"Fashion magazine style portrait of a striking young woman with sharp, defined features, confident gaze straight into the camera, minimal but edgy makeup with bold eyeliner and matte lips, sleek blunt bob haircut in deep black, wearing a modern monochrome outfit: structured black blazer over a crisp white top, subtle silver jewelry, standing against a clean architectural background of concrete and glass, slightly off-center composition, shot with an 85mm lens at f/2.0, crisp details on face and clothing, background softly blurred, cool-toned color grading with a hint of teal and orange, high-end editorial lighting with clear contrast and soft shadows, contemporary fashion photography"
]
},
{
"id": 31,
"type": "KSampler",
"pos": [
1070,
190
],
"size": [
315,
262
],
"flags": {},
"order": 8,
"mode": 0,
"inputs": [
{
"name": "model",
"type": "MODEL",
"link": 64
},
{
"name": "positive",
"type": "CONDITIONING",
"link": 57
},
{
"name": "negative",
"type": "CONDITIONING",
"link": 55
},
{
"name": "latent_image",
"type": "LATENT",
"link": 51
}
],
"outputs": [
{
"name": "LATENT",
"type": "LATENT",
"slot_index": 0,
"links": [
52
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "KSampler"
},
"widgets_values": [
1234,
"fixed",
20,
1,
"euler",
"normal",
1
]
},
{
"id": 43,
"type": "VAELoader",
"pos": [
1174.5506464243365,
71.00368181687476
],
"size": [
210,
58
],
"flags": {
"collapsed": false
},
"order": 3,
"mode": 0,
"inputs": [],
"outputs": [
{
"name": "VAE",
"type": "VAE",
"links": [
62
]
}
],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.39",
"Node name for S&R": "VAELoader"
},
"widgets_values": [
"ae.safetensors"
]
},
{
"id": 45,
"type": "SaveImage",
"pos": [
1579.382263188637,
190
],
"size": [
375.4432999999999,
426.65870000000007
],
"flags": {},
"order": 10,
"mode": 0,
"inputs": [
{
"name": "images",
"type": "IMAGE",
"link": 65
}
],
"outputs": [],
"properties": {
"cnr_id": "comfy-core",
"ver": "0.3.76"
},
"widgets_values": [
"ComfyUI"
]
}
],
"links": [
[
51,
27,
0,
31,
3,
"LATENT"
],
[
52,
31,
0,
8,
0,
"LATENT"
],
[
55,
33,
0,
31,
2,
"CONDITIONING"
],
[
56,
6,
0,
35,
0,
"CONDITIONING"
],
[
57,
35,
0,
31,
1,
"CONDITIONING"
],
[
59,
42,
0,
6,
0,
"CLIP"
],
[
60,
42,
0,
33,
0,
"CLIP"
],
[
62,
43,
0,
8,
1,
"VAE"
],
[
63,
41,
0,
44,
0,
"MODEL"
],
[
64,
44,
0,
31,
0,
"MODEL"
],
[
65,
8,
0,
45,
0,
"IMAGE"
]
],
"groups": [],
"config": {},
"extra": {
"ds": {
"scale": 0.9090909090909091,
"offset": [
-85.05879211425781,
65.4269221118265
]
},
"frontendVersion": "1.34.5",
"VHS_latentpreview": false,
"VHS_latentpreviewrate": 0,
"VHS_MetadataImage": true,
"VHS_KeepIntermediate": true
},
"version": 0.4
}
This is how you apply a LoRA with the newer models. Keep it in mind.