ComfyUI/comfy/ldm
Raphael Walker 61b50720d0
Add support for attention masking in Flux (#5942)
* fix attention OOM in xformers

* allow passing attention mask in flux attention

* allow an attn_mask in flux

* attn masks can be done using replace patches instead of a separate dict (see the first sketch after this list)

* fix return types

* fix return order

* enumerate

* patch the right keys

* arg names

* fix a silly bug

* fix xformers masks

* replace match with if, elif, else

* mask with image_ref_size

* remove unused import

* remove unused import 2

* fix pytorch/xformers attention

This corrects an inconsistency in the handling of skip_reshape.
It also allows masks of various shapes to be passed; they are
automatically expanded (in a memory-efficient way) to a size
compatible with xformers or PyTorch SDPA respectively (see the
second sketch after this list).

* fix mask shapes
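
The "replace patches instead of a separate dict" bullet refers to dispatching per-block patches keyed by block index, so the attn_mask rides along with the normal block inputs. Below is a minimal, runnable sketch of that pattern; the key format `("double_block", i)`, the `args`/`original_block` calling convention, and the stub block are assumptions modeled on ComfyUI's DiT patch mechanism, not verbatim from this commit.

```python
import torch

class DummyDoubleBlock(torch.nn.Module):
    # Stand-in for a Flux double-stream block; returns img/txt unchanged.
    def forward(self, img, txt, vec, pe, attn_mask=None):
        return img, txt

def run_double_blocks(blocks, blocks_replace, img, txt, vec, pe, attn_mask=None):
    for i, block in enumerate(blocks):
        if ("double_block", i) in blocks_replace:
            # Give the patch a way to call the original block (hypothetical
            # convention: args dict in, {"img": ..., "txt": ...} out).
            def block_wrap(args, block=block):
                img_out, txt_out = block(img=args["img"], txt=args["txt"],
                                         vec=args["vec"], pe=args["pe"],
                                         attn_mask=args.get("attn_mask"))
                return {"img": img_out, "txt": txt_out}
            out = blocks_replace[("double_block", i)](
                {"img": img, "txt": txt, "vec": vec, "pe": pe,
                 "attn_mask": attn_mask},
                {"original_block": block_wrap})
            img, txt = out["img"], out["txt"]
        else:
            img, txt = block(img=img, txt=txt, vec=vec, pe=pe,
                             attn_mask=attn_mask)
    return img, txt

# A patch that simply forwards to the original block.
def passthrough_patch(args, extra):
    return extra["original_block"](args)

blocks = [DummyDoubleBlock() for _ in range(2)]
img, txt = run_double_blocks(blocks,
                             {("double_block", 1): passthrough_patch},
                             torch.zeros(1, 4, 8), torch.zeros(1, 4, 8),
                             vec=None, pe=None)
```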
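The memory-efficient expansion described in the last bullets can be illustrated with `torch.Tensor.expand`, which broadcasts by creating a stride-0 view rather than copying data. This is a minimal sketch against PyTorch SDPA (the helper name and accepted shapes are assumptions, not ComfyUI's exact code); a similar broadcast applies to the bias passed to xformers.

```python
import torch

def expand_attn_mask(mask, batch, heads, q_len, k_len):
    # Accept masks shaped (q, k), (b, q, k), or (b, h, q, k) and broadcast
    # to the full 4-D shape SDPA expects. expand() returns a view with
    # stride-0 broadcast axes, so no extra memory is allocated.
    if mask.ndim == 2:
        mask = mask.unsqueeze(0).unsqueeze(0)   # (1, 1, q, k)
    elif mask.ndim == 3:
        mask = mask.unsqueeze(1)                # (b, 1, q, k)
    return mask.expand(batch, heads, q_len, k_len)

q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 8, 16, 64)
v = torch.randn(2, 8, 16, 64)
mask = torch.ones(16, 16, dtype=torch.bool)     # True = attend
out = torch.nn.functional.scaled_dot_product_attention(
    q, k, v, attn_mask=expand_attn_mask(mask, 2, 8, 16, 16))
```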
2024-12-16 18:21:17 -05:00
Name           Last commit                                          Date
audio          Lint and fix undefined names (1/N) (#6028)           2024-12-12 18:55:26 -05:00
aura           Lint all unused variables (#5989)                    2024-12-12 17:59:16 -05:00
cascade        Lint unused import (#5973)                           2024-12-09 15:24:39 -05:00
flux           Add support for attention masking in Flux (#5942)    2024-12-16 18:21:17 -05:00
genmo          Lint all unused variables (#5989)                    2024-12-12 17:59:16 -05:00
hydit          Lint all unused variables (#5989)                    2024-12-12 17:59:16 -05:00
lightricks     Lint unused import (#5973)                           2024-12-09 15:24:39 -05:00
models         Lint and fix undefined names (1/N) (#6028)           2024-12-12 18:55:26 -05:00
modules        Add support for attention masking in Flux (#5942)    2024-12-16 18:21:17 -05:00
common_dit.py  Make pad_to_patch_size function work on multi dim.   2024-12-13 07:22:05 -05:00
util.py        Lint all unused variables (#5989)                    2024-12-12 17:59:16 -05:00