Commit Graph

1479 Commits

Author SHA1 Message Date
silveroxides  cd3d2d5c62  remove unused imports  2025-04-11 16:22:04 +02:00
silveroxides  2c2481955d  reduce code duplication  2025-04-11 16:18:57 +02:00
Silver  fc978a7ad8  Merge branch 'comfyanonymous:master' into chroma-support  2025-04-11 13:07:36 +02:00
Chargeuk  ed945a1790  Dependency Aware Node Caching for low RAM/VRAM machines (#7509)  2025-04-11 06:55:51 -04:00
    * Add a dependency-aware cache that removes a cached node as soon as all of its descendants have executed. This allows users with lower RAM to run workflows they otherwise could not. The downside is that every workflow runs fully each time, even if no nodes have changed.
    * Remove test code
    * Tidy code
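The eviction idea described in that commit can be sketched roughly as follows. This is a minimal illustration with hypothetical names, not the actual ComfyUI implementation: a node's cached output is dropped as soon as every downstream node that consumes it has executed.

```python
class DependencyAwareCache:
    """Hypothetical sketch: evict a node's cached output once all of its
    dependents (children in the execution graph) have run."""

    def __init__(self, graph):
        # graph: dict mapping node_id -> list of node_ids that consume its output
        self.graph = graph
        # Number of dependents that have not executed yet, per node.
        self.pending = {node: len(children) for node, children in graph.items()}
        self.cache = {}

    def store(self, node_id, output):
        # Cache a node's output after it executes.
        self.cache[node_id] = output

    def mark_executed(self, node_id, input_ids):
        # After node_id runs, each of its input nodes has one fewer
        # unexecuted dependent; evict an input once none remain.
        for parent in input_ids:
            self.pending[parent] -= 1
            if self.pending[parent] == 0:
                self.cache.pop(parent, None)
```

This trades recomputation for memory: nothing survives between workflow runs, which matches the commit's stated downside that every workflow re-executes fully even when unchanged.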
Silver  1c76711441  Merge branch 'comfyanonymous:master' into chroma-support  2025-04-10 18:16:10 +02:00
Chenlei Hu  98bdca4cb2  Deprecate InputTypeOptions.defaultInput (#7551)  2025-04-10 06:57:06 -04:00
    * Deprecate InputTypeOptions.defaultInput
    * nit
    * nit
Jedrzej Kosinski  e346d8584e  Add prepare_sampling wrapper allowing custom nodes to more accurately report noise_shape (#7500)  2025-04-09 09:43:35 -04:00
silveroxides  3d375a153e  fix mistake  2025-04-08 12:56:02 +02:00
silveroxides  e1af413722  Adjust memory usage factor and remove unnecessary code  2025-04-08 12:48:25 +02:00
Silver  c4f6874dc1  Merge branch 'comfyanonymous:master' into chroma-support  2025-04-07 19:21:09 +02:00
comfyanonymous  70d7242e57  Support the wan fun reward loras.  2025-04-07 05:01:47 -04:00
silveroxides  cb6ece9a18  set modelType.FLOW, will cause beta scheduler to work properly  2025-04-06 09:52:32 +02:00
Silver  de3f3ea28b  Merge branch 'comfyanonymous:master' into chroma-support  2025-04-06 07:31:00 +02:00
comfyanonymous  3bfe4e5276  Support 512 siglip model.  2025-04-05 07:01:01 -04:00
Silver  1ca0353b96  Merge branch 'comfyanonymous:master' into chroma-support  2025-04-05 12:58:40 +02:00
Raphael Walker  89e4ea0175  Add activations_shape info in UNet models (#7482)  2025-04-04 21:27:54 -04:00
    * Add activations_shape info in UNet models
    * activations_shape should be a list
comfyanonymous  3a100b9a55  Disable partial offloading of audio VAE.  2025-04-04 21:24:56 -04:00
Silver  7012015928  Merge branch 'comfyanonymous:master' into chroma-support  2025-04-03 04:09:39 +02:00
BiologicalExplosion  2222cf67fd  MLU memory optimization (#7470)  2025-04-02 19:24:04 -04:00
    Co-authored-by: huzhan <huzhan@cambricon.com>
BVH  301e26b131  Add option to store TE in bf16 (#7461)  2025-04-01 13:48:53 -04:00
Silver  cb3e388e8f  Merge branch 'comfyanonymous:master' into chroma-support  2025-04-01 16:42:40 +02:00
comfyanonymous  a3100c8452  Remove useless code.  2025-03-29 20:12:56 -04:00
comfyanonymous  2d17d8910c  Don't error if wan concat image has extra channels.  2025-03-28 08:49:29 -04:00
silveroxides  2378c63ba9  update model_base.py  2025-03-27 18:43:58 +01:00
Silver  bf339e8265  Update supported_models.py  2025-03-27 18:29:51 +01:00
comfyanonymous  0a1f8869c9  Add WanFunInpaintToVideo node for the Wan fun inpaint models.  2025-03-27 11:13:27 -04:00
Silver  b7da8e2bc1  Add lora conversion if statement in lora.py  2025-03-27 03:08:24 +01:00
comfyanonymous  3661c833bc  Support the WAN 2.1 fun control models. Use the new WanFunControlToVideo node.  2025-03-26 19:54:54 -04:00
Silver  159df22899  Merge branch 'comfyanonymous:master' into chroma-support  2025-03-25 21:38:54 +01:00
silveroxides  f04b502ab6  get_mdulations added from blepping and minor changes  2025-03-25 21:38:11 +01:00
comfyanonymous  8edc1f44c1  Support more float8 types.  2025-03-25 05:23:49 -04:00
Silver  fe6e1fa44f  Set min_length to 1  2025-03-25 07:21:08 +01:00
Silver  4fa663d66f  Merge branch 'comfyanonymous:master' into chroma-support  2025-03-23 22:30:00 +01:00
Silver  9fa34e7216  Set min_length to 0 and remove attention_mask=True  2025-03-23 06:14:50 +01:00
comfyanonymous  e471c726e5  Fallback to pytorch attention if sage attention fails.  2025-03-22 15:45:56 -04:00
Silver  fda35f37e5  Add supported_inference_dtypes  2025-03-22 16:49:16 +01:00
Silver  ca73500269  remove unused imports  2025-03-22 16:20:08 +01:00
Silver  2710f77218  trim more trailing whitespace..oops  2025-03-22 16:18:17 +01:00
Silver  79f460150e  Remove trailing whitespace  2025-03-22 16:16:23 +01:00
silveroxides  0a4bc660d4  Upload files for Chroma Implementation  2025-03-22 15:27:11 +01:00
comfyanonymous  d9fa9d307f  Automatically set the right sampling type for lotus.  2025-03-21 14:19:37 -04:00
thot experiment  83e839a89b  Native LotusD Implementation (#7125)  2025-03-21 14:04:15 -04:00
    * Draft pass at a native comfy implementation of Lotus-D depth and normal estimation
    * Fix model_sampling kludges
    * Fix ruff
    Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
comfyanonymous  3872b43d4b  A few fixes for the hunyuan3d models.  2025-03-20 04:52:31 -04:00
comfyanonymous  32ca0805b7  Fix orientation of hunyuan 3d model.  2025-03-19 19:55:24 -04:00
comfyanonymous  11f1b41bab  Initial Hunyuan3Dv2 implementation. Supports the multiview, mini, turbo models and VAEs.  2025-03-19 16:52:58 -04:00
comfyanonymous  3b19fc76e3  Allow disabling pe in flux code for some other models.  2025-03-18 05:09:25 -04:00
comfyanonymous  50614f1b79  Fix regression with clip vision.  2025-03-17 13:56:11 -04:00
comfyanonymous  6dc7b0bfe3  Add support for giant dinov2 image encoder.  2025-03-17 05:53:54 -04:00
comfyanonymous  e8e990d6b8  Cleanup code.  2025-03-16 06:29:12 -04:00
Jedrzej Kosinski  2e24a15905  Call unpatch_hooks at the start of ModelPatcher.partially_unload (#7253)  2025-03-16 06:02:45 -04:00
    * Call unpatch_hooks at the start of ModelPatcher.partially_unload
    * Only call unpatch_hooks in partially_unload if lowvram is possible