Commit Graph

549 Commits

Author SHA1 Message Date
comfyanonymous
2a23ba0b8c Fix unet ops not being entirely on the GPU. 2023-11-07 04:30:37 -05:00
comfyanonymous
844dbf97a7 Add: advanced->model->ModelSamplingDiscrete node.
This allows changing the sampling parameters of the model (eps or vpred)
or setting the model to use zsnr.
2023-11-07 03:28:53 -05:00
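
For context, ComfyUI nodes are plain Python classes; below is a minimal sketch of a node following the same pattern. The input names and the patching step are assumptions inferred from the message above, not the actual implementation of the node.

    # Hedged sketch in the ComfyUI custom-node style; the real
    # advanced->model->ModelSamplingDiscrete node may differ.
    class ModelSamplingDiscreteSketch:
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {
                "model": ("MODEL",),
                "sampling": (["eps", "v_prediction"],),
                "zsnr": ("BOOLEAN", {"default": False}),
            }}

        RETURN_TYPES = ("MODEL",)
        FUNCTION = "patch"
        CATEGORY = "advanced/model"

        def patch(self, model, sampling, zsnr):
            m = model.clone()  # leave the original ModelPatcher untouched
            # Hypothetical helper: build a sampling object for eps or
            # v-prediction, optionally with zero-terminal-SNR sigmas.
            # m.add_object_patch("model_sampling", make_model_sampling(sampling, zsnr))
            return (m,)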
comfyanonymous
656c0b5d90 CLIP code refactor and improvements.
A more generic clip model class that can be used with more types of text
encoders.

Don't apply the weighting algorithm when the weight is 1.0.

Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
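
The "weight is 1.0" change is just an early-out before the weighting pass. A hedged sketch of the idea, with illustrative names rather than the repo's actual symbols:

    import torch

    def apply_token_weights(embeddings: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # Illustrative only: skip the weighting pass entirely when every
        # weight is exactly 1.0, since it would be a no-op anyway.
        if torch.all(weights == 1.0):
            return embeddings
        # Otherwise scale each token embedding by its weight.
        return embeddings * weights.unsqueeze(-1)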
comfyanonymous
b3fcd64c6c Make SDTokenizer class work with more types of tokenizers. 2023-11-06 01:09:18 -05:00
gameltb
7e455adc07 fix unet_wrapper_function name in ModelPatcher 2023-11-05 17:11:44 +08:00
comfyanonymous
1ffa8858e7 Move model sampling code to comfy/model_sampling.py 2023-11-04 01:32:23 -04:00
comfyanonymous
ae2acfc21b Don't convert NaN to zero.
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
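
In code terms, the point is to fail loudly instead of masking the problem with something like torch.nan_to_num. A small, hedged illustration (names are hypothetical):

    import torch

    def check_latent(latent: torch.Tensor) -> torch.Tensor:
        # Illustrative only: torch.nan_to_num(latent) would hide the error
        # and produce a plausible-looking but wrong result downstream.
        if torch.isnan(latent).any():
            raise ValueError("NaN detected; something upstream went wrong")
        return latent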
comfyanonymous
d2e27b48f1 sampler_cfg_function now gets the noisy output as argument again.
This should make things that use sampler_cfg_function behave as before.

Added an input argument for those who want the denoised output.

This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
2023-11-01 21:24:08 -04:00
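
A hedged sketch of a custom sampler_cfg_function using this. The args keys used here ("input", "cond", "uncond", "cond_scale") and the return convention mirroring the cond/uncond space are assumptions based on the message above, not a verified signature:

    def my_cfg_function(args):
        x = args["input"]        # noisy input, passed again since this commit
        cond = args["cond"]
        uncond = args["uncond"]
        scale = args["cond_scale"]
        x0_cond = x - cond       # x0 prediction, as described above
        x0_uncond = x - uncond
        x0_cfg = x0_uncond + (x0_cond - x0_uncond) * scale
        return x - x0_cfg        # back to the same space as cond/uncond

    # Registration would typically go through the model patcher, e.g.:
    # model.set_model_sampler_cfg_function(my_cfg_function)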
comfyanonymous
2455aaed8a Allow model or clip to be None in load_lora_for_models. 2023-11-01 20:27:20 -04:00
comfyanonymous
ecb80abb58 Allow ModelSamplingDiscrete to be instantiated without a model config. 2023-11-01 19:13:03 -04:00
comfyanonymous
e73ec8c4da Not used anymore. 2023-11-01 00:01:30 -04:00
comfyanonymous
111f1b5255 Fix some issues with sampling precision. 2023-10-31 23:49:29 -04:00
comfyanonymous
7c0f255de1 Clean up percent start/end and make controlnets work with sigmas. 2023-10-31 22:14:32 -04:00
comfyanonymous
a268a574fa Remove a bunch of useless code.
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
comfyanonymous
1777b54d02 Sampling code changes.
apply_model in model_base now returns the denoised output.

This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
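
For reference, the usual conversions between the raw model output and the denoised (x0) output are straightforward; a hedged sketch following the common k-diffusion convention (sigma_data = 1), not necessarily the repo's exact code:

    import torch

    def denoised_from_eps(x: torch.Tensor, eps: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
        # eps-prediction models: x0 = x - sigma * eps
        return x - sigma * eps

    def denoised_from_v(x: torch.Tensor, v: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
        # v-prediction models: x0 = x / (sigma^2 + 1) - v * sigma / sqrt(sigma^2 + 1)
        return x / (sigma ** 2 + 1.0) - v * sigma / (sigma ** 2 + 1.0) ** 0.5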
comfyanonymous
c837a173fa Fix some memory issues in sub quad attention. 2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead Fix some OOM issues with split attention. 2023-10-30 13:14:11 -04:00
comfyanonymous
a12cc05323 Add a --max-upload-size argument; the default is 100MB. 2023-10-29 03:55:46 -04:00
comfyanonymous
2a134bfab9 Fix checkpoint loader with config. 2023-10-27 22:13:55 -04:00
comfyanonymous
e60ca6929a SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. 2023-10-27 15:54:04 -04:00
comfyanonymous
6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 2023-10-27 14:45:15 -04:00
comfyanonymous
434ce25ec0 Restrict loading embeddings from embedding folders. 2023-10-27 02:54:13 -04:00
comfyanonymous
723847f6b3 Faster clip image processing. 2023-10-26 01:53:01 -04:00
comfyanonymous
a373367b0c Fix some OOM issues with split and sub quad attention. 2023-10-25 20:17:28 -04:00
comfyanonymous
7fbb217d3a Fix uni_pc returning a noisy image when steps <= 3. 2023-10-25 16:08:30 -04:00
Jedrzej Kosinski
3783cb8bfd Change 'c_adm' to 'y' in ControlNet.get_control. 2023-10-25 08:24:32 -05:00
comfyanonymous
d1d2fea806 Pass extra conds directly to unet. 2023-10-25 00:07:53 -04:00
comfyanonymous
036f88c621 Refactor to make it easier to add custom conds to models. 2023-10-24 23:31:12 -04:00
comfyanonymous
3fce8881ca Sampling code refactor to make it easier to add more conds. 2023-10-24 03:38:41 -04:00
comfyanonymous
8594c8be4d Empty the torch cache when it holds more than 25% of the free memory. 2023-10-22 13:58:12 -04:00
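
The check presumably compares the memory held by torch's caching allocator against the free memory reported by the driver. A hedged sketch using standard torch.cuda calls; the 25% figure comes from the message, everything else is an assumption:

    import torch

    def maybe_empty_cache(device: torch.device) -> None:
        reserved = torch.cuda.memory_reserved(device)     # held by the allocator
        allocated = torch.cuda.memory_allocated(device)   # actually in use
        cached = reserved - allocated                     # the "torch cache"
        free_cuda, _ = torch.cuda.mem_get_info(device)    # free per the driver
        # Release the cache back to the driver once it exceeds 25% of the
        # total memory that could be freed (driver-free + cached).
        if cached > 0.25 * (free_cuda + cached):
            torch.cuda.empty_cache()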
comfyanonymous
8b65f5de54 attention_basic now works with hypertile. 2023-10-22 03:59:53 -04:00
comfyanonymous
e6bc42df46 Make sub_quad and split work with hypertile. 2023-10-22 03:51:29 -04:00
comfyanonymous
a0690f9df9 Fix t2i adapter issue. 2023-10-21 20:31:24 -04:00
comfyanonymous
9906e3efe3 Make xformers work with hypertile. 2023-10-21 13:23:03 -04:00
comfyanonymous
4185324a1d Fix uni_pc sampler math. This changes the images this sampler produces. 2023-10-20 04:16:53 -04:00
comfyanonymous
e6962120c6 Make sure cond_concat is on the right device. 2023-10-19 01:14:25 -04:00
comfyanonymous
45c972aba8 Refactor cond_concat into conditioning. 2023-10-18 20:36:58 -04:00
comfyanonymous
430a8334c5 Fix some potential issues. 2023-10-18 19:48:36 -04:00
comfyanonymous
782a24fce6 Refactor cond_concat into model object. 2023-10-18 16:48:37 -04:00
comfyanonymous
0d45a565da Fix memory issue related to control loras.
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
comfyanonymous
d44a2de49f Make VAE code closer to sgm. 2023-10-17 15:18:51 -04:00
comfyanonymous
23680a9155 Refactor the attention code in the VAE. 2023-10-17 03:19:29 -04:00
comfyanonymous
c8013f73e5 Add some Quadro cards to the list of cards with broken fp16. 2023-10-16 16:48:46 -04:00
comfyanonymous
bb064c9796 Add a separate optimized_attention_masked function. 2023-10-16 02:31:24 -04:00
comfyanonymous
fd4c5f07e7 Add a --bf16-unet argument to test running the unet in bf16. 2023-10-13 14:51:10 -04:00
comfyanonymous
9a55dadb4c Refactor code so the model can be a dtype other than fp32 or fp16. 2023-10-13 14:41:17 -04:00
comfyanonymous
88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 2023-10-11 21:30:57 -04:00
comfyanonymous
20d3852aa1 Pull some small changes from the other repo. 2023-10-11 20:38:48 -04:00
comfyanonymous
ac7d8cfa87 Allow attn_mask in attention_pytorch. 2023-10-11 20:38:48 -04:00
comfyanonymous
1a4bd9e9a6 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
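
The structure described above keeps one attention module and swaps only the inner operation (basic, split, sub-quad, xformers, pytorch). A hedged sketch of that shape; class and argument names are illustrative, not the repo's actual symbols:

    import torch.nn as nn

    def default_attention(q, k, v, heads):
        # Roughly the pytorch path: split heads, run PyTorch's
        # scaled dot-product attention, merge heads back.
        b, _, dim = q.shape
        def split(t):
            return t.view(b, -1, heads, dim // heads).transpose(1, 2)
        out = nn.functional.scaled_dot_product_attention(split(q), split(k), split(v))
        return out.transpose(1, 2).reshape(b, -1, dim)

    class CrossAttentionSketch(nn.Module):
        # One shared module; only attn_fn changes between backends,
        # which is the point of the refactor described above.
        def __init__(self, dim, heads=8, attn_fn=default_attention):
            super().__init__()
            self.heads = heads
            self.to_q = nn.Linear(dim, dim, bias=False)
            self.to_k = nn.Linear(dim, dim, bias=False)
            self.to_v = nn.Linear(dim, dim, bias=False)
            self.to_out = nn.Linear(dim, dim)
            self.attn_fn = attn_fn

        def forward(self, x, context=None):
            context = x if context is None else context
            q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
            return self.to_out(self.attn_fn(q, k, v, self.heads))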