Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2025-03-16 06:27:15 +00:00)
* Add --use-flash-attention flag. This is useful on AMD systems, where FlashAttention builds are still 10% faster than PyTorch cross-attention.
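The flag itself comes from the commit above; everything below is a sketch of how such a backend switch might be wired up, not ComfyUI's actual implementation. ComfyUI is typically launched via `python main.py`, so the flag would be passed as `python main.py --use-flash-attention`. In the sketch, only `flash_attn_func` (from the `flash-attn` package) and PyTorch's `scaled_dot_product_attention` are real APIs; the `attention()` helper and its `use_flash_attention` parameter are hypothetical names.

```python
# A minimal sketch, assuming the flag simply selects an attention backend.
# The attention() helper and use_flash_attention parameter are hypothetical;
# they are not ComfyUI's actual code.
import torch
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func  # pip install flash-attn
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False

def attention(q, k, v, use_flash_attention=False):
    """q, k, v: (batch, heads, seq_len, head_dim) CUDA tensors."""
    if use_flash_attention and HAS_FLASH_ATTN and q.dtype in (torch.float16, torch.bfloat16):
        # flash_attn_func expects (batch, seq_len, heads, head_dim),
        # so transpose into its layout and back.
        out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
        return out.transpose(1, 2)
    # Fallback: PyTorch's built-in fused attention kernel.
    return F.scaled_dot_product_attention(q, k, v)
```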
diffusionmodules/
distributions/
encoders/
attention.py
ema.py
sub_quadratic_attention.py
temporal_ae.py