Commit Graph

103 Commits

Author SHA1 Message Date
comfyanonymous
b6a60fa696 Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
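A minimal sketch of the idea behind this commit (the names below are hypothetical, not the actual ComfyUI internals): the text encoder goes through the same keep-loaded path as the unet, so its patched weights stay resident between prompts instead of being re-patched and re-transferred on every execution.

    import torch

    _resident = {}  # models currently kept loaded on the GPU

    def load_model_gpu(patcher):
        # Hypothetical sketch: patch and move the model once, then reuse it
        # on later executions instead of repeating the work per prompt.
        if patcher not in _resident:
            model = patcher.patch_model()  # apply lora / weight patches
            model.to("cuda" if torch.cuda.is_available() else "cpu")
            _resident[patcher] = model
        return _resident[patcher]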
comfyanonymous
97ee230682 Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running it on the CPU.
2023-07-01 12:37:23 -04:00
comfyanonymous
5a9ddf94eb LoraLoader node now caches the lora file between executions. 2023-06-29 23:40:51 -04:00
comfyanonymous
62db11683b Move unet to device right after loading on highvram mode. 2023-06-29 20:43:06 -04:00
comfyanonymous
2c7c14de56 Support for SDXL text encoder lora. 2023-06-28 02:22:49 -04:00
comfyanonymous
9b93b920be Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.

Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32

Anything that patches the model weights like merging or loras will be
saved.

The output directory is currently set to: output/checkpoints but that might
change in the future.
2023-06-26 12:22:27 -04:00
comfyanonymous
b72a7a835a Support loras based on the stability unet implementation. 2023-06-26 02:56:11 -04:00
comfyanonymous
20f579d91d Add DualCLIPLoader to load CLIP models for SDXL.
Update CLIPLoader to load CLIP models for the SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous
b7933960bb Fix CLIPLoader node. 2023-06-24 13:56:46 -04:00
comfyanonymous
05676942b7 Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
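Roughly what the q-vs-x change means, as a simplified sketch (not the actual tomesd code): the token-merging similarity is computed on the attention queries q instead of on the hidden states x.

    import torch

    def merge_similarity(q: torch.Tensor) -> torch.Tensor:
        # q: (batch, tokens, dim) attention queries; previously the hidden
        # states x were used as the similarity metric instead.
        q = q / q.norm(dim=-1, keepdim=True)    # cosine similarity
        a, b = q[:, ::2, :], q[:, 1::2, :]      # bipartite split of the tokens
        return a @ b.transpose(-1, -2)          # score of each src token against each dst token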
comfyanonymous
8607c2d42d Move latent scale factor from VAE to model. 2023-06-23 02:33:31 -04:00
comfyanonymous
30a3861946 Fix bug when yaml config has no clip params. 2023-06-23 01:12:59 -04:00
comfyanonymous
9e37f4c7d5 Fix error with ClipVision loader node. 2023-06-23 01:08:05 -04:00
comfyanonymous
9f83b098c9 Don't merge weights when shapes don't match and print a warning. 2023-06-22 19:08:31 -04:00
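An illustrative sketch of that guard (hypothetical helper, not the actual merge code): a lora or merge whose tensors don't line up with the checkpoint is skipped with a warning instead of corrupting the weights.

    def apply_patch(weights, key, patch, strength=1.0):
        # Skip the patch and warn when the tensor shapes don't match.
        w = weights[key]
        if w.shape != patch.shape:
            print(f"WARNING: shape mismatch for {key}: {tuple(w.shape)} vs {tuple(patch.shape)}, skipping")
            return
        weights[key] = w + strength * patch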
comfyanonymous
f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
51581dbfa9 Fix last commits causing an issue with the text encoder lora. 2023-06-20 19:44:39 -04:00
comfyanonymous
8125b51a62 Keep a set of model_keys for faster add_patches. 2023-06-20 19:08:48 -04:00
comfyanonymous
45beebd33c Add a type of model patch useful for model merging. 2023-06-20 17:34:11 -04:00
comfyanonymous
8883cb0f67 Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future proof.
2023-06-18 22:58:22 -04:00
comfyanonymous
fb4bf7f591 This is not needed anymore and causes issues with alphas_cumprod. 2023-06-18 03:18:25 -04:00
comfyanonymous
f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous
6b774589a5 Set model to fp16 before loading the state dict to reduce the RAM usage spike. 2023-06-14 12:48:02 -04:00
comfyanonymous
388567f20b sampler_cfg_function now receives its arguments as a dict.
This means new arguments can be added without breaking existing code.
2023-06-13 16:10:36 -04:00
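A minimal sketch of the new calling convention (the key names "cond", "uncond" and "cond_scale" are assumed here): because the function receives a single dict, extra keys can be introduced later without breaking existing custom CFG functions.

    def my_cfg_function(args):
        # args is a dict, so new entries can be added in future versions
        # without changing this function's signature.
        cond = args["cond"]
        uncond = args["uncond"]
        cond_scale = args["cond_scale"]
        return uncond + (cond - uncond) * cond_scale  # standard CFG mix

A custom node would typically register such a function on a cloned model (see the "custom CFG function" commit further down this list), e.g. through a setter like set_model_sampler_cfg_function.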
comfyanonymous
ff9b22d79e Turn on safe load for a few models. 2023-06-13 10:12:03 -04:00
comfyanonymous
f0a2b81cd0 Cleanup: Remove a bunch of useless files. 2023-06-13 02:19:08 -04:00
comfyanonymous
f8c5931053 Split the batch in VAEEncode if there's not enough memory. 2023-06-12 00:21:50 -04:00
comfyanonymous
c069fc0730 Auto switch to tiled VAE encode if regular one runs out of memory. 2023-06-11 23:25:39 -04:00
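The fallback pattern, roughly (the method names here are assumptions): attempt the regular encode and, if the GPU runs out of memory, retry with the tiled path.

    import torch

    def encode_with_fallback(vae, pixels):
        # Try the normal VAE encode first; fall back to tiled encoding on OOM.
        try:
            return vae.encode(pixels)
        except torch.cuda.OutOfMemoryError:
            print("Warning: out of memory, retrying with tiled VAE encode")
            return vae.encode_tiled(pixels)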
comfyanonymous
de142eaad5 Simpler base model code. 2023-06-09 12:31:16 -04:00
comfyanonymous
0e425603fb Small refactor. 2023-06-06 13:23:01 -04:00
comfyanonymous
700491d81a Implement global average pooling for controlnet. 2023-06-03 01:49:03 -04:00
comfyanonymous
03da8a3426 This is useless for inference. 2023-05-31 13:03:24 -04:00
comfyanonymous
eb448dd8e1 Auto load model in lowvram if not enough memory. 2023-05-30 12:36:41 -04:00
comfyanonymous
a532888846 Support VAEs in diffusers format. 2023-05-28 02:02:09 -04:00
BlenderNeko
19c014f429 comment out annoying print statement 2023-05-12 23:57:40 +02:00
BlenderNeko
d9e088ddfd minor changes for tiled sampler 2023-05-12 23:49:09 +02:00
comfyanonymous
bae4fb4a9d Fix imports. 2023-05-04 18:10:29 -04:00
comfyanonymous
fcf513e0b6 Refactor. 2023-05-03 17:48:35 -04:00
pythongosssss
5eeecf3fd5 remove unused import 2023-05-03 18:21:23 +01:00
pythongosssss
8912623ea9 use comfy progress bar 2023-05-03 18:19:22 +01:00
pythongosssss
fdf57325f4 Merge remote-tracking branch 'origin/master' into tiled-progress 2023-05-03 17:33:42 +01:00
pythongosssss
27df74101e reduce duplication 2023-05-03 17:33:19 +01:00
pythongosssss
06ad35b493 added progress to encode + upscale 2023-05-02 19:18:07 +01:00
comfyanonymous
9c335a553f LoKR support. 2023-05-01 18:18:23 -04:00
pythongosssss
c8c9926eeb Add progress to vae decode tiled 2023-04-24 11:55:44 +01:00
comfyanonymous
5282f56434 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
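For context, a minimal sketch of a linear hypernetwork module (simplified, A1111-style; not the loader's actual code): small residual stacks of linear layers applied to the cross-attention context before it is projected to keys and values.

    import torch.nn as nn

    class LinearHypernetworkModule(nn.Module):
        # Simplified sketch: a residual linear stack applied to the
        # cross-attention context used for the key/value projections.
        def __init__(self, dim: int, mult: int = 2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, dim * mult),
                nn.Linear(dim * mult, dim),
            )

        def forward(self, context):
            return context + self.net(context)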
comfyanonymous
3696d1699a Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
comfyanonymous
884ea653c8 Add a way for nodes to set a custom CFG function. 2023-04-17 11:05:15 -04:00
comfyanonymous
73c3e11e83 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous
81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2023-04-15 18:46:58 -04:00
BlenderNeko
da115bd78d ensure backwards compat with optional args 2023-04-14 21:16:55 +02:00