ComfyUI/comfy/ldm/modules
FeepingCreature 7aceb9f91c
Add --use-flash-attention flag. (#7223)
This is useful on AMD systems, as flash-attention builds are still 10% faster than PyTorch cross-attention.
2025-03-14 03:22:41 -04:00
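The flag is passed at launch (e.g. python main.py --use-flash-attention, main.py being ComfyUI's usual entry point). As a rough, hypothetical sketch only, not the repository's actual wiring, a backend switch of this kind could look like the following, assuming the optional flash-attn package provides flash_attn_func and PyTorch's scaled_dot_product_attention is the fallback:

# Hypothetical sketch of a --use-flash-attention switch; not ComfyUI's actual code.
import argparse
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func  # optional dependency, GPU builds only
except ImportError:
    flash_attn_func = None

parser = argparse.ArgumentParser()
parser.add_argument("--use-flash-attention", action="store_true",
                    help="Route attention through the flash_attn kernel.")
args = parser.parse_args()

def attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim) half-precision tensors on the GPU
    if args.use_flash_attention and flash_attn_func is not None:
        # flash_attn_func expects (batch, seq_len, heads, head_dim)
        out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
        return out.transpose(1, 2)
    # Default path: PyTorch's built-in scaled dot-product attention
    return F.scaled_dot_product_attention(q, k, v)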
diffusionmodules Disable pytorch attention in VAE for AMD. 2025-02-14 05:42:14 -05:00
distributions Small optimizations. 2024-12-18 18:23:28 -05:00
encoders Make unclip more deterministic. 2024-01-14 17:28:31 -05:00
attention.py Add --use-flash-attention flag. (#7223) 2025-03-14 03:22:41 -04:00
ema.py Initial commit. 2023-01-16 22:37:14 -05:00
sub_quadratic_attention.py Fix and enforce all ruff W rules. 2025-01-01 03:08:33 -05:00
temporal_ae.py Basic Hunyuan Video model support. 2024-12-16 19:35:40 -05:00