* Allow disabling pe in flux code for some other models.
* Initial Hunyuan3Dv2 implementation.
Supports the multiview, mini, and turbo models and VAEs.
* Fix orientation of hunyuan 3d model.
* A few fixes for the hunyuan3d models.
* Update frontend to 1.13 (#7331)
* Add backend primitive nodes (#7328)
* Add backend primitive nodes
* Add control after generate to int primitive
* Nodes to convert images to YUV and back.
Can be used to convert an image to black and white.
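Conceptually, the round trip behind these nodes can be sketched in plain Python with the BT.601 coefficients; keeping only the Y (luma) channel and zeroing the chroma is what yields a black-and-white image. Function names here are illustrative, not the node implementation, which operates on image tensors:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (floats in [0, 1]) to BT.601 YUV."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse BT.601 transform."""
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b

def to_grayscale(r, g, b):
    """Black and white: keep luma, zero the chroma channels."""
    y, _, _ = rgb_to_yuv(r, g, b)
    return yuv_to_rgb(y, 0.0, 0.0)
```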
* Update frontend to 1.14 (#7343)
* Native LotusD Implementation (#7125)
* Draft pass at a native Comfy implementation of Lotus-D depth and normal estimation
* fix model_sampling kludges
* fix ruff
---------
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
* Automatically set the right sampling type for lotus.
* Support outputting normal and lineart at once (#7290)
* [nit] Format error strings (#7345)
* ComfyUI version v0.3.27
* Fallback to pytorch attention if sage attention fails.
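The fallback described here is a plain try/except degrade pattern; the sketch below uses hypothetical names rather than ComfyUI's actual attention dispatch:

```python
def attention_with_fallback(q, k, v, sage_fn, torch_fn):
    """Try the SageAttention kernel first; on any failure, fall back
    to the PyTorch attention implementation. Names are illustrative,
    not ComfyUI's actual API."""
    try:
        return sage_fn(q, k, v)
    except Exception:
        # Sage can fail on unsupported head dims / dtypes; degrade gracefully.
        return torch_fn(q, k, v)
```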
* Add model merging node for WAN 2.1
* Add Hunyuan3D to readme.
* Support more float8 types.
* Add CFGZeroStar node.
Works on all models that use a negative prompt but is meant for rectified
flow models.
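As a rough sketch of the CFG-Zero* idea on flat lists of floats (the real node operates on model-output tensors and also handles zero-initializing early steps; all names here are illustrative assumptions):

```python
def cfg_zero_star(cond, uncond, cfg_scale):
    """Sketch of a CFG-Zero*-style combination: project the
    unconditional prediction onto the conditional one before applying
    the guidance scale, instead of using plain CFG."""
    dot = sum(c * u for c, u in zip(cond, uncond))
    norm = sum(u * u for u in uncond) or 1.0
    alpha = dot / norm  # optimized projection scale
    return [u * alpha + cfg_scale * (c - u * alpha)
            for c, u in zip(cond, uncond)]
```

When cond and uncond agree, alpha is 1 and the result reduces to the conditional prediction regardless of the scale, which is the intended degenerate case.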
* Support the WAN 2.1 fun control models.
Use the new WanFunControlToVideo node.
* Add WanFunInpaintToVideo node for the Wan fun inpaint models.
* Update frontend to 1.14.6 (#7416)
Cherry-pick the fix: https://github.com/Comfy-Org/ComfyUI_frontend/pull/3252
* Don't error if wan concat image has extra channels.
* ltxv: fix preprocessing exception when compression is 0. (#7431)
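The guard can be sketched like this (hypothetical helper names; the real preprocessing simulates compression artifacts on the input image):

```python
def simulate_compression(image, strength):
    # Stand-in for the real compression-artifact simulation:
    # quantize pixel values by the given strength.
    return [round(px / strength) * strength for px in image]

def preprocess(image, compression):
    # compression == 0 means "no preprocessing": return the image
    # untouched instead of passing a zero strength to the codec step,
    # which previously raised an exception.
    if compression <= 0:
        return image
    return simulate_compression(image, compression)
```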
* Remove useless code.
* Fix latent composite node not working when source has alpha.
* Fix alpha channel mismatch on destination in ImageCompositeMasked
* Add option to store TE in bf16 (#7461)
* User missing (#7439)
* Ensuring a 401 error is returned when user data is not found in a multi-user context.
* Returning a 401 error when the provided comfy-user does not exist on the server side.
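The intended behavior can be sketched as a plain lookup returning an HTTP status (illustrative only; the actual server code differs):

```python
def get_user_data(users, user_id):
    """Sketch: in a multi-user setup, an unknown comfy-user id should
    yield a 401 response rather than a crash or an empty 200.
    'users' is a hypothetical id -> data mapping."""
    if user_id not in users:
        return 401, {"error": "Unauthorized: user not found"}
    return 200, users[user_id]
```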
* Fix comment.
This function does not support quads.
* MLU memory optimization (#7470)
Co-authored-by: huzhan <huzhan@cambricon.com>
* Fix alpha image issue in more nodes.
* Fix problem.
* Disable partial offloading of audio VAE.
* Add activations_shape info in UNet models (#7482)
* Add activations_shape info in UNet models
* activations_shape should be a list
* Support 512 siglip model.
* Show a proper error to the user when a vision model file is invalid.
* Support the wan fun reward loras.
---------
Co-authored-by: comfyanonymous <comfyanonymous@protonmail.com>
Co-authored-by: Chenlei Hu <hcl@comfy.org>
Co-authored-by: thot experiment <94414189+thot-experiment@users.noreply.github.com>
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Co-authored-by: Terry Jia <terryjia88@gmail.com>
Co-authored-by: Michael Kupchick <michael@lightricks.com>
Co-authored-by: BVH <82035780+bvhari@users.noreply.github.com>
Co-authored-by: Laurent Erignoux <lerignoux@gmail.com>
Co-authored-by: BiologicalExplosion <49753622+BiologicalExplosion@users.noreply.github.com>
Co-authored-by: huzhan <huzhan@cambricon.com>
Co-authored-by: Raphael Walker <slickytail.mc@gmail.com>
This commit relaxes the divisibility constraint for single-frame
conditionings. For single frames, the index can be arbitrary, while
multi-frame conditionings (>= 9 frames) must still be aligned to 8
frames.
Co-authored-by: Andrew Kvochko <a.kvochko@lightricks.com>
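The relaxed rule can be sketched as a small validity check (the 8-frame constant and function name are assumptions for illustration):

```python
def keyframe_index_ok(num_frames, frame_idx):
    """Single-frame conditionings may sit at any index; multi-frame
    conditionings must start on an 8-frame boundary. Sketch only."""
    if num_frames == 1:
        return True  # arbitrary index allowed for single frames
    # multi-frame conditionings must align to the 8-frame grid
    return frame_idx % 8 == 0
```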
This patch fixes a bug in LTXVCropGuides when the latent has no
keyframes. Additionally, the first frame is always added as a keyframe.
Co-authored-by: Andrew Kvochko <a.kvochko@lightricks.com>
The frontend part isn't done yet, so there is no video preview on the node,
and dragging the webm onto the interface to load the workflow doesn't work yet.
This uses a new dependency: PyAV.
* Add 'sigmas' to transformer_options so that downstream code can know the full scope of the current sampling run. Also fix Hook Keyframes' guarantee_steps=1 inconsistent behavior when sampling is split across different Sampling nodes/sampling runs, by referencing 'sigmas'
* Cleaned up hooks.py, refactored Hook.should_register and add_hook_patches to use target_dict instead of target so that more information can be provided about the current execution environment if needed
* Refactor WrapperHook into TransformerOptionsHook, as there is no need to separate out Wrappers/Callbacks/Patches into different hook types (all affect transformer_options)
* Refactored HookGroup to also store a dictionary of hooks separated by hook_type, modified necessary code to no longer need to manually separate out hooks by hook_type
* In inner_sample, change "sigmas" to "sampler_sigmas" in transformer_options to avoid conflicting with the "sigmas" key that gets overwritten in _calc_cond_batch
* Refactored 'registered' to be a HookGroup instead of a list of Hooks, made AddModelsHook operational and compliant with the should_register result, moved TransformerOptionsHook handling out of ModelPatcher.register_all_hook_patches, and properly supported patches in TransformerOptionsHook by casting any patches/wrappers/hooks to the proper device at sample time
* Made hook clone code sane, made clear ObjectPatchHook and SetInjectionsHook are not yet operational
* Fix performance of hooks when hooks are appended via Cond Pair Set Props nodes by properly caching between positive and negative conds, and make hook_patches_backup behave as intended (in the case that something pre-registers WeightHooks on the ModelPatcher instead of registering them at sample time)
* Filter only registered hooks on self.conds in CFGGuider.sample
* Make hook_scope functional for TransformerOptionsHook
* Removed 4 whitespace lines to satisfy Ruff
* Add a get_injections function to ModelPatcher
* Made TransformerOptionsHook contribute to registered hooks properly, added some docstrings, and removed a so-far-unused variable
* Rename AddModelsHooks to AdditionalModelsHook, rename SetInjectionsHook to InjectionsHook (not yet implemented, but at least getting the naming figured out)
* Clean up a typehint
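As a sketch of how downstream code might use the sigma schedule exposed by the changes above (the "sampler_sigmas" key name follows those notes; everything else here is an illustrative assumption):

```python
def sampling_progress(transformer_options, current_sigma):
    """Estimate progress through the full sampling run from the
    schedule in transformer_options, even when sampling is split
    across several sampler nodes. Sketch only."""
    sigmas = transformer_options.get("sampler_sigmas")
    if not sigmas:
        return None
    # index of the closest scheduled sigma approximates the current step
    step = min(range(len(sigmas)), key=lambda i: abs(sigmas[i] - current_sigma))
    return step / max(len(sigmas) - 1, 1)
```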
The 10-step minimum for the AYS scheduler is pointless: it works well at lower step counts, like 8 or even 4 steps, for example with LCM or DMD2.
Example here: https://i.ibb.co/56CSPMj/image.png