Commit Graph

1465 Commits

Author SHA1 Message Date
Kohaku-Blueleaf
b8bac6558a
Merge e8f3bc5ab7 into 22ad513c72 2025-04-11 08:18:20 -04:00
Chargeuk
ed945a1790
Dependency Aware Node Caching for low RAM/VRAM machines (#7509)
* Add a dependency-aware cache that removes a cached node as soon as all of its descendants have executed. This allows users with lower RAM to run workflows they would otherwise not be able to run. The downside is that every workflow will fully run each time even if no nodes have changed.

* remove test code

* tidy code
2025-04-11 06:55:51 -04:00
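The dependency-aware caching described in #7509 can be sketched roughly as follows. This is an illustrative sketch only, not ComfyUI's actual implementation; the names `DependencyAwareCache`, `store`, and `mark_executed` are invented for the example:

```python
# Illustrative sketch of dependency-aware cache eviction (not ComfyUI's code).
# A node's cached output is dropped as soon as every node that consumes it
# (its direct descendants in the workflow graph) has finished executing,
# trading re-execution time for lower peak RAM/VRAM.

class DependencyAwareCache:
    def __init__(self, graph):
        # graph: dict mapping node_id -> list of node_ids it feeds into
        self.children = graph
        # Count how many direct consumers of each node are still pending.
        self.pending = {nid: len(kids) for nid, kids in graph.items()}
        self.cache = {}

    def store(self, node_id, output):
        self.cache[node_id] = output

    def mark_executed(self, node_id, parents):
        # Called after node_id runs; parents are the nodes whose outputs it used.
        for p in parents:
            self.pending[p] -= 1
            if self.pending[p] == 0:
                # All consumers are done: free the cached output immediately.
                self.cache.pop(p, None)
```

For a graph where A feeds both B and C, A's output stays cached after B runs and is evicted only once C has also run.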
Chenlei Hu
98bdca4cb2
Deprecate InputTypeOptions.defaultInput (#7551)
* Deprecate InputTypeOptions.defaultInput

* nit

* nit
2025-04-10 06:57:06 -04:00
Jedrzej Kosinski
e346d8584e
Add prepare_sampling wrapper allowing custom nodes to more accurately report noise_shape (#7500) 2025-04-09 09:43:35 -04:00
Kohaku-Blueleaf
e8f3bc5ab7 Finalize the modularized weight adapter impl
* LoRA/LoHa/LoKr/GLoRA working well
* Removed TONS of code in lora.py
2025-04-09 09:16:52 +08:00
Kohaku-Blueleaf
889f94773a Remove unused import 2025-04-08 22:01:43 +08:00
Kohaku-Blueleaf
ff050275ab Use correct v list 2025-04-08 18:48:58 +08:00
Kohaku-Blueleaf
a220e5ca80 Fix typing syntax error 2025-04-08 18:46:53 +08:00
Kohaku-Blueleaf
726fdfcaa0 Fix import error 2025-04-08 18:46:43 +08:00
Kohaku-Blueleaf
88d9168df0
Sync (#1)
* Allow disabling pe in flux code for some other models.

* Initial Hunyuan3Dv2 implementation.

Supports the multiview, mini, turbo models and VAEs.

* Fix orientation of hunyuan 3d model.

* A few fixes for the hunyuan3d models.

* Update frontend to 1.13 (#7331)

* Add backend primitive nodes (#7328)

* Add backend primitive nodes

* Add control after generate to int primitive

* Nodes to convert images to YUV and back.

Can be used to convert an image to black and white.

* Update frontend to 1.14 (#7343)

* Native LotusD Implementation (#7125)

* Draft pass at a native comfy implementation of Lotus-D depth and normal estimation

* fix model_sampling kludges

* fix ruff

---------

Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>

* Automatically set the right sampling type for lotus.

* support output normal and lineart once (#7290)

* [nit] Format error strings (#7345)

* ComfyUI version v0.3.27

* Fallback to pytorch attention if sage attention fails.

* Add model merging node for WAN 2.1

* Add Hunyuan3D to readme.

* Support more float8 types.

* Add CFGZeroStar node.

Works on all models that use a negative prompt but is meant for rectified
flow models.

* Support the WAN 2.1 fun control models.

Use the new WanFunControlToVideo node.

* Add WanFunInpaintToVideo node for the Wan fun inpaint models.

* Update frontend to 1.14.6 (#7416)

Cherry-pick the fix: https://github.com/Comfy-Org/ComfyUI_frontend/pull/3252

* Don't error if wan concat image has extra channels.

* ltxv: fix preprocessing exception when compression is 0. (#7431)

* Remove useless code.

* Fix latent composite node not working when source has alpha.

* Fix alpha channel mismatch on destination in ImageCompositeMasked

* Add option to store TE in bf16 (#7461)

* User missing (#7439)

* Ensuring a 401 error is returned when user data is not found in a multi-user context.

* Returning a 401 error when the provided comfy-user does not exist on the server side.

* Fix comment.

This function does not support quads.

* MLU memory optimization (#7470)

Co-authored-by: huzhan <huzhan@cambricon.com>

* Fix alpha image issue in more nodes.

* Fix problem.

* Disable partial offloading of audio VAE.

* Add activations_shape info in UNet models (#7482)

* Add activations_shape info in UNet models

* activations_shape should be a list

* Support 512 siglip model.

* Show a proper error to the user when a vision model file is invalid.

* Support the wan fun reward loras.

---------

Co-authored-by: comfyanonymous <comfyanonymous@protonmail.com>
Co-authored-by: Chenlei Hu <hcl@comfy.org>
Co-authored-by: thot experiment <94414189+thot-experiment@users.noreply.github.com>
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Co-authored-by: Terry Jia <terryjia88@gmail.com>
Co-authored-by: Michael Kupchick <michael@lightricks.com>
Co-authored-by: BVH <82035780+bvhari@users.noreply.github.com>
Co-authored-by: Laurent Erignoux <lerignoux@gmail.com>
Co-authored-by: BiologicalExplosion <49753622+BiologicalExplosion@users.noreply.github.com>
Co-authored-by: huzhan <huzhan@cambricon.com>
Co-authored-by: Raphael Walker <slickytail.mc@gmail.com>
2025-04-08 18:38:44 +08:00
comfyanonymous
70d7242e57 Support the wan fun reward loras. 2025-04-07 05:01:47 -04:00
comfyanonymous
3bfe4e5276 Support 512 siglip model. 2025-04-05 07:01:01 -04:00
Raphael Walker
89e4ea0175
Add activations_shape info in UNet models (#7482)
* Add activations_shape info in UNet models

* activations_shape should be a list
2025-04-04 21:27:54 -04:00
comfyanonymous
3a100b9a55 Disable partial offloading of audio VAE. 2025-04-04 21:24:56 -04:00
BiologicalExplosion
2222cf67fd
MLU memory optimization (#7470)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
Kohaku-Blueleaf
84317474fd lint 2025-04-02 09:31:24 +08:00
Kohaku-Blueleaf
c40686eb42 Utilize new weight adapter in lora.py
For calculate_weight, a temporary fallback mechanism is implemented for development
2025-04-02 09:22:05 +08:00
Kohaku-Blueleaf
4774c3244e Initial impl
LoRA load/calculate_weight
LoHa/LoKr/GLoRA load
2025-04-02 09:21:39 +08:00
Kohaku-Blueleaf
6fb4cc0179 Weight Adapter Scheme 2025-04-02 09:21:17 +08:00
BVH
301e26b131
Add option to store TE in bf16 (#7461) 2025-04-01 13:48:53 -04:00
comfyanonymous
a3100c8452 Remove useless code. 2025-03-29 20:12:56 -04:00
comfyanonymous
2d17d8910c Don't error if wan concat image has extra channels. 2025-03-28 08:49:29 -04:00
comfyanonymous
0a1f8869c9 Add WanFunInpaintToVideo node for the Wan fun inpaint models. 2025-03-27 11:13:27 -04:00
comfyanonymous
3661c833bc Support the WAN 2.1 fun control models.
Use the new WanFunControlToVideo node.
2025-03-26 19:54:54 -04:00
comfyanonymous
8edc1f44c1 Support more float8 types. 2025-03-25 05:23:49 -04:00
comfyanonymous
e471c726e5 Fallback to pytorch attention if sage attention fails. 2025-03-22 15:45:56 -04:00
comfyanonymous
d9fa9d307f Automatically set the right sampling type for lotus. 2025-03-21 14:19:37 -04:00
thot experiment
83e839a89b
Native LotusD Implementation (#7125)
* Draft pass at a native comfy implementation of Lotus-D depth and normal estimation

* fix model_sampling kludges

* fix ruff

---------

Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
comfyanonymous
3872b43d4b A few fixes for the hunyuan3d models. 2025-03-20 04:52:31 -04:00
comfyanonymous
32ca0805b7 Fix orientation of hunyuan 3d model. 2025-03-19 19:55:24 -04:00
comfyanonymous
11f1b41bab Initial Hunyuan3Dv2 implementation.
Supports the multiview, mini, turbo models and VAEs.
2025-03-19 16:52:58 -04:00
comfyanonymous
3b19fc76e3 Allow disabling pe in flux code for some other models. 2025-03-18 05:09:25 -04:00
comfyanonymous
50614f1b79 Fix regression with clip vision. 2025-03-17 13:56:11 -04:00
comfyanonymous
6dc7b0bfe3 Add support for giant dinov2 image encoder. 2025-03-17 05:53:54 -04:00
comfyanonymous
e8e990d6b8 Cleanup code. 2025-03-16 06:29:12 -04:00
Jedrzej Kosinski
2e24a15905
Call unpatch_hooks at the start of ModelPatcher.partially_unload (#7253)
* Call unpatch_hooks at the start of ModelPatcher.partially_unload

* Only call unpatch_hooks in partially_unload if lowvram is possible
2025-03-16 06:02:45 -04:00
chaObserv
fd5297131f
Guard the edge cases of noise term in er_sde (#7265) 2025-03-16 06:02:25 -04:00
comfyanonymous
55a1b09ddc Allow loading diffusion model files with the "Load Checkpoint" node. 2025-03-15 08:27:49 -04:00
comfyanonymous
3c3988df45 Show a better error message if the VAE is invalid. 2025-03-15 08:26:36 -04:00
comfyanonymous
a2448fc527 Remove useless code. 2025-03-14 18:10:37 -04:00
comfyanonymous
6a0daa79b6 Make the SkipLayerGuidanceDIT node work on WAN. 2025-03-14 10:55:19 -04:00
FeepingCreature
9c98c6358b
Tolerate missing @torch.library.custom_op (#7234)
This can happen on Pytorch versions older than 2.4.
2025-03-14 09:51:26 -04:00
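Tolerating a missing `torch.library.custom_op` (absent before PyTorch 2.4) is typically done with an import guard. This is a generic sketch of the pattern, not the exact code from #7234:

```python
# Illustrative compatibility shim (not the PR's exact code): fall back to a
# no-op decorator when torch.library.custom_op is unavailable (PyTorch < 2.4,
# or torch not installed at all).
try:
    from torch.library import custom_op  # available in PyTorch >= 2.4
except ImportError:
    def custom_op(name, *args, **kwargs):
        # Fallback: return the function unchanged instead of registering
        # it as a custom operator.
        def decorator(fn):
            return fn
        return decorator
```

Code can then decorate functions with `@custom_op(...)` unconditionally and still import on older PyTorch versions.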
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. (#7223)
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
comfyanonymous
35504e2f93 Fix. 2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed Print mac version. 2025-03-13 10:05:40 -04:00
Chenlei Hu
9b6cd9b874
[NodeDef] Add documentation on multi_select input option (#7212) 2025-03-12 17:29:39 -04:00
chaObserv
3fc688aebd
Ensure the extra_args are passed in the dpmpp sde series (#7204) 2025-03-12 17:28:59 -04:00
chaObserv
01015bff16
Add er_sde sampler (#7187) 2025-03-12 02:42:37 -04:00
comfyanonymous
ca8efab79f Support control loras on Wan. 2025-03-10 17:23:13 -04:00
comfyanonymous
9aac21f894 Fix issues with new hunyuan img2vid model and bump version to v0.3.26 2025-03-09 05:07:22 -04:00