ComfyUI/comfy/ldm/modules
Raphael Walker 61b50720d0
Add support for attention masking in Flux (#5942)
* fix attention OOM in xformers

* allow passing attention mask in flux attention

* allow an attn_mask in flux

* attn masks can be done using replace patches instead of a separate dict

* fix return types

* fix return order

* enumerate

* patch the right keys

* arg names

* fix a silly bug

* fix xformers masks

* replace match with if, elif, else

* mask with image_ref_size

* remove unused import

* remove unused import 2

* fix pytorch/xformers attention

This corrects a weird inconsistency with skip_reshape.
It also allows masks of various shapes to be passed; they are
automatically expanded (in a memory-efficient way) to a size that is
compatible with xformers or pytorch sdpa respectively (see the sketch below).

* fix mask shapes
2024-12-16 18:21:17 -05:00
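
The memory-efficient mask expansion described in the commit notes can be sketched roughly as follows. This is a minimal illustration using torch scaled_dot_product_attention only; expand_attn_mask and attention_with_mask are hypothetical names for this sketch, not the actual helpers in attention.py.

    import torch
    import torch.nn.functional as F

    def expand_attn_mask(mask, batch, heads, q_len, k_len):
        # Hypothetical helper: broadcast a 2-D [q_len, k_len] or
        # 3-D [batch, q_len, k_len] mask up to the [batch, heads, q_len, k_len]
        # layout accepted by torch SDPA. expand() only creates a view,
        # so the repeated dimensions cost no extra memory.
        if mask.ndim == 2:
            mask = mask[None, None]
        elif mask.ndim == 3:
            mask = mask[:, None]
        return mask.expand(batch, heads, q_len, k_len)

    def attention_with_mask(q, k, v, mask=None):
        # q, k, v: [batch, heads, seq_len, head_dim]
        b, h, q_len, _ = q.shape
        k_len = k.shape[2]
        if mask is not None:
            mask = expand_attn_mask(mask, b, h, q_len, k_len)
        return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

    # Example: a per-sample boolean mask broadcast to all heads.
    q = k = v = torch.randn(2, 8, 77, 64)
    mask = torch.ones(2, 77, 77, dtype=torch.bool)
    out = attention_with_mask(q, k, v, mask=mask)  # -> [2, 8, 77, 64]

Using expand() rather than repeat() keeps the smaller user-supplied mask as the only allocated tensor, which is the memory-efficient behavior the commit message refers to.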
..
diffusionmodules Support conv3d in PatchEmbed. 2024-12-14 05:46:04 -05:00
distributions Initial commit. 2023-01-16 22:37:14 -05:00
encoders Make unclip more deterministic. 2024-01-14 17:28:31 -05:00
attention.py Add support for attention masking in Flux (#5942) 2024-12-16 18:21:17 -05:00
ema.py Initial commit. 2023-01-16 22:37:14 -05:00
sub_quadratic_attention.py Enforce all pyflake lint rules (#6033) 2024-12-12 19:29:37 -05:00
temporal_ae.py Lint unused import (#5973) 2024-12-09 15:24:39 -05:00