825 Commits

comfyanonymous
69c8d6d8a6 Single and dual clip loader nodes support SD3.
You can use the CLIPLoader to use t5xxl only, or the DualCLIPLoader to
use CLIP-L and CLIP-G only, for SD3.
2024-06-11 23:27:39 -04:00
comfyanonymous
0e49211a11 Load the SD3 T5xxl model in the same dtype stored in the checkpoint. 2024-06-11 17:03:26 -04:00
comfyanonymous
5889b7ca0a Support multiple text encoder configurations on SD3. 2024-06-11 13:14:43 -04:00
comfyanonymous
9424522ead Reuse code. 2024-06-11 07:20:26 -04:00
Dango233
73ce178021 Remove redundancy in mmdit.py (#3685) 2024-06-11 06:30:25 -04:00
comfyanonymous
a82fae2375 Fix bug with cosxl edit model. 2024-06-10 16:00:03 -04:00
comfyanonymous
8c4a9befa7 SD3 Support. 2024-06-10 14:06:23 -04:00
comfyanonymous
a5e6a632f9 Support sampling non 2D latents. 2024-06-10 01:31:09 -04:00
comfyanonymous
742d5720d1 Support zeroing out text embeddings with the attention mask. 2024-06-09 16:51:58 -04:00
comfyanonymous
6cd8ffc465 Reshape the empty latent image to the right amount of channels if needed. 2024-06-08 02:35:08 -04:00
comfyanonymous
56333d4850 Use the end token for the text encoder attention mask. 2024-06-07 03:05:23 -04:00
comfyanonymous
104fcea0c8 Add function to get the list of currently loaded models. 2024-06-05 23:25:16 -04:00
comfyanonymous
b1fd26fe9e pytorch xpu should be flash or mem efficient attention? 2024-06-04 17:44:14 -04:00
comfyanonymous
809cc85a8e Remove useless code. 2024-06-02 19:23:37 -04:00
comfyanonymous
b249862080 Add an annoying print to a function I want to remove. 2024-06-01 12:47:31 -04:00
comfyanonymous
bf3e334d46 Disable non_blocking when --deterministic or directml. 2024-05-30 11:07:38 -04:00
JettHu
b26da2245f Fix UnetParams annotation typo (#3589) 2024-05-27 19:30:35 -04:00
comfyanonymous
0920e0e5fe Remove some unused imports. 2024-05-27 19:08:27 -04:00
comfyanonymous
ffc4b7c30e Fix DORA strength.
This is a different version of #3298 with more correct behavior.
2024-05-25 02:50:11 -04:00
comfyanonymous
efa5a711b2 Reduce memory usage when applying DORA: #3557 2024-05-24 23:36:48 -04:00
comfyanonymous
6c23854f54 Fix OSX latent2rgb previews. 2024-05-22 13:56:28 -04:00
Chenlei Hu
7718ada4ed Add type annotation UnetWrapperFunction (#3531)
* Add type annotation UnetWrapperFunction

* nit

* Add types.py
2024-05-22 02:07:27 -04:00
comfyanonymous
8508df2569 Work around black image bug on Mac 14.5 by forcing attention upcasting. 2024-05-21 16:56:33 -04:00
comfyanonymous
83d969e397 Disable xformers when tracing model. 2024-05-21 13:55:49 -04:00
comfyanonymous
1900e5119f Fix potential issue. 2024-05-20 08:19:54 -04:00
comfyanonymous
09e069ae6c Log the pytorch version. 2024-05-20 06:22:29 -04:00
comfyanonymous
11a2ad5110 Fix controlnet not upcasting on models that have it enabled. 2024-05-19 17:58:03 -04:00
comfyanonymous
0bdc2b15c7 Cleanup. 2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9 Remove unnecessary code. 2024-05-18 09:36:44 -04:00
comfyanonymous
19300655dd Don't automatically switch to lowvram mode on GPUs with low memory. 2024-05-17 00:31:32 -04:00
comfyanonymous
46daf0a9a7 Add debug options to force on and off attention upcasting. 2024-05-16 04:09:41 -04:00
comfyanonymous
2d41642716 Fix lowvram dora issue. 2024-05-15 02:47:40 -04:00
comfyanonymous
ec6f16adb6 Fix SAG. 2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837 Only enable attention upcasting on models that actually need it. 2024-05-14 17:00:50 -04:00
comfyanonymous
b0ab31d06c Refactor attention upcasting code part 1. 2024-05-14 12:47:31 -04:00
Simon Lui
f509c6fe21 Fix Intel GPU memory allocation accuracy and documentation update. (#3459)
* Change calculation of memory total to be more accurate, allocated is actually smaller than reserved.

* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
comfyanonymous
fa6dd7e5bb Fix lowvram issue with saving checkpoints.
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous
49c20cdc70 No longer necessary. 2024-05-12 05:34:43 -04:00
comfyanonymous
e1489ad257 Fix issue with lowvram mode breaking model saving. 2024-05-11 21:55:20 -04:00
comfyanonymous
93e876a3be Remove warnings that confuse people. 2024-05-09 05:29:42 -04:00
comfyanonymous
cd07340d96 Typo fix. 2024-05-08 18:36:56 -04:00
comfyanonymous
c61eadf69a Make the load checkpoint with config function call the regular one.
I was going to completely remove this function because it is unmaintainable
but I think this is the best compromise.

The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
2024-05-06 20:04:39 -04:00
Simon Lui
a56d02efc7 Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. (#3388) 2024-05-02 03:26:50 -04:00
comfyanonymous
f81a6fade8 Fix some edge cases with samplers and arrays with a single sigma. 2024-05-01 17:05:30 -04:00
comfyanonymous
2aed53c4ac Workaround xformers bug. 2024-04-30 21:23:40 -04:00
Garrett Sutula
bacce529fb Add TLS Support (#3312)
* Add TLS Support

* Add to readme

* Add guidance for windows users on generating certificates

* Fix typo
2024-04-30 20:17:02 -04:00
Jedrzej Kosinski
7990ae18c1 Fix error when more cond masks passed in than batch size (#3353) 2024-04-26 12:51:12 -04:00
comfyanonymous
8dc19e40d1 Don't init a VAE model when there are no VAE weights. 2024-04-24 09:20:31 -04:00
comfyanonymous
c59fe9f254 Support VAE without quant_conv. 2024-04-18 21:05:33 -04:00
comfyanonymous
719fb2c81d Add basic PAG node. 2024-04-14 23:49:50 -04:00