mirror of
https://github.com/comfyanonymous/ComfyUI.git
synced 2025-04-18 18:33:30 +00:00

* Allow disabling pe in flux code for some other models.
* Initial Hunyuan3Dv2 implementation. Supports the multiview, mini, turbo models and VAEs.
* Fix orientation of hunyuan 3d model.
* A few fixes for the hunyuan3d models.
* Update frontend to 1.13 (#7331)
* Add backend primitive nodes (#7328): add backend primitive nodes; add control after generate to int primitive.
* Nodes to convert images to YUV and back. Can be used to convert an image to black and white.
* Update frontend to 1.14 (#7343)
* Native LotusD Implementation (#7125): draft pass at a native comfy implementation of Lotus-D depth and normal estimation; fix model_sampling kludges; fix ruff. (Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>)
* Automatically set the right sampling type for lotus.
* support output normal and lineart once (#7290)
* [nit] Format error strings (#7345)
* ComfyUI version v0.3.27
* Fallback to pytorch attention if sage attention fails.
* Add model merging node for WAN 2.1
* Add Hunyuan3D to readme.
* Support more float8 types.
* Add CFGZeroStar node. Works on all models that use a negative prompt but is meant for rectified flow models.
* Support the WAN 2.1 fun control models. Use the new WanFunControlToVideo node.
* Add WanFunInpaintToVideo node for the Wan fun inpaint models.
* Update frontend to 1.14.6 (#7416): cherry-pick the fix https://github.com/Comfy-Org/ComfyUI_frontend/pull/3252
* Don't error if wan concat image has extra channels.
* ltxv: fix preprocessing exception when compression is 0. (#7431)
* Remove useless code.
* Fix latent composite node not working when source has alpha.
* Fix alpha channel mismatch on destination in ImageCompositeMasked
* Add option to store TE in bf16 (#7461)
* User missing (#7439): return a 401 error when user data is not found in a multi-user context; return a 401 error when the provided comfy-user does not exist on the server side.
* Fix comment. This function does not support quads.
* MLU memory optimization (#7470) (Co-authored-by: huzhan <huzhan@cambricon.com>)
* Fix alpha image issue in more nodes.
* Fix problem.
* Disable partial offloading of audio VAE.
* Add activations_shape info in UNet models (#7482): activations_shape should be a list.
* Support 512 siglip model.
* Show a proper error to the user when a vision model file is invalid.
* Support the wan fun reward loras.

Co-authored-by: comfyanonymous <comfyanonymous@protonmail.com>
Co-authored-by: Chenlei Hu <hcl@comfy.org>
Co-authored-by: thot experiment <94414189+thot-experiment@users.noreply.github.com>
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Co-authored-by: Terry Jia <terryjia88@gmail.com>
Co-authored-by: Michael Kupchick <michael@lightricks.com>
Co-authored-by: BVH <82035780+bvhari@users.noreply.github.com>
Co-authored-by: Laurent Erignoux <lerignoux@gmail.com>
Co-authored-by: BiologicalExplosion <49753622+BiologicalExplosion@users.noreply.github.com>
Co-authored-by: huzhan <huzhan@cambricon.com>
Co-authored-by: Raphael Walker <slickytail.mc@gmail.com>
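The CFGZeroStar commit above rescales the unconditional branch of classifier-free guidance by a projection factor alpha. A scalar sketch (pure Python, variable names hypothetical) checks that the node's single-line rewrite of the already-combined CFG output is algebraically equivalent to recomputing CFG with the rescaled uncond term:

```python
# Standard CFG combines the conditional/unconditional predictions as:
#   out = uncond + s * (cond - uncond)
# CFG-Zero* rescales the uncond branch by a projection factor alpha:
#   out' = alpha * uncond + s * (cond - alpha * uncond)
# The node derives out' from out without redoing CFG:
#   out' = out + uncond * (alpha - 1) + s * uncond * (1 - alpha)

def patched(out, uncond, s, alpha):
    # Post-CFG correction applied by the CFGZeroStar node
    return out + uncond * (alpha - 1.0) + s * uncond * (1.0 - alpha)

cond, uncond, s, alpha = 0.8, 0.5, 6.0, 1.3
out = uncond + s * (cond - uncond)                       # plain CFG
direct = alpha * uncond + s * (cond - alpha * uncond)    # rescaled CFG
print(abs(patched(out, uncond, s, alpha) - direct) < 1e-12)  # True
```

Expanding `patched` term by term shows the `+ uncond` and `- s * uncond` pieces of plain CFG cancel, leaving exactly the rescaled form.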
46 lines
1.4 KiB
Python
import torch

# https://github.com/WeichenFan/CFG-Zero-star
def optimized_scale(positive, negative):
    positive_flat = positive.reshape(positive.shape[0], -1)
    negative_flat = negative.reshape(negative.shape[0], -1)

    # Dot product between the conditional and unconditional predictions
    dot_product = torch.sum(positive_flat * negative_flat, dim=1, keepdim=True)

    # Squared norm of the unconditional prediction (1e-8 avoids division by zero)
    squared_norm = torch.sum(negative_flat ** 2, dim=1, keepdim=True) + 1e-8

    # st_star = v_cond^T * v_uncond / ||v_uncond||^2
    st_star = dot_product / squared_norm

    return st_star.reshape([positive.shape[0]] + [1] * (positive.ndim - 1))


class CFGZeroStar:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {"model": ("MODEL",),
                             }}
    RETURN_TYPES = ("MODEL",)
    RETURN_NAMES = ("patched_model",)
    FUNCTION = "patch"
    CATEGORY = "advanced/guidance"

    def patch(self, model):
        m = model.clone()
        def cfg_zero_star(args):
            guidance_scale = args['cond_scale']
            x = args['input']
            cond_p = args['cond_denoised']
            uncond_p = args['uncond_denoised']
            out = args["denoised"]
            alpha = optimized_scale(x - cond_p, x - uncond_p)

            return out + uncond_p * (alpha - 1.0) + guidance_scale * uncond_p * (1.0 - alpha)
        m.set_model_sampler_post_cfg_function(cfg_zero_star)
        return (m, )

NODE_CLASS_MAPPINGS = {
    "CFGZeroStar": CFGZeroStar
}
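A quick standalone sanity check of `optimized_scale` (the function is restated here so the snippet runs on its own; assumes PyTorch is installed). With `negative = [1, 0]` and `positive = [2, 0]`, the projection scale is `dot / ||negative||^2 = 2 / 1 = 2`, up to the 1e-8 stabilizer:

```python
import torch

def optimized_scale(positive, negative):
    # Per-sample projection of positive onto negative: dot / ||negative||^2
    positive_flat = positive.reshape(positive.shape[0], -1)
    negative_flat = negative.reshape(negative.shape[0], -1)
    dot_product = torch.sum(positive_flat * negative_flat, dim=1, keepdim=True)
    squared_norm = torch.sum(negative_flat ** 2, dim=1, keepdim=True) + 1e-8
    st_star = dot_product / squared_norm
    # Reshape to (batch, 1, 1, ...) so it broadcasts over the input
    return st_star.reshape([positive.shape[0]] + [1] * (positive.ndim - 1))

scale = optimized_scale(torch.tensor([[2.0, 0.0]]),
                        torch.tensor([[1.0, 0.0]]))
print(scale.shape)   # torch.Size([1, 1])
print(float(scale))  # scale is approximately 2.0
```

Note the output keeps one trailing singleton dimension per non-batch dimension of the input, so the scale broadcasts cleanly when multiplied against image- or video-shaped latents.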