Commit Graph

16 Commits

Author SHA1 Message Date
Chenlei Hu
563291ee51 Enforce all pyflake lint rules (#6033) 2024-12-12 19:29:37 -05:00
* Enforce F821 undefined-name
* Enforce all pyflake lint rules
Chenlei Hu
60749f345d Lint and fix undefined names (3/N) (#6030) 2024-12-12 18:49:40 -05:00
comfyanonymous
2fd9c1308a Fix mask issue in some attention functions. 2024-11-22 02:10:09 -05:00
comfyanonymous
2a813c3b09 Switch some more prints to logging. 2024-03-11 16:34:58 -04:00
comfyanonymous
aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
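The attention-mask support added in this commit amounts to biasing the score vector before the softmax so masked positions receive zero weight. A minimal sketch of that step; `masked_softmax` is a hypothetical stand-in, not ComfyUI's actual function:

```python
import math

def masked_softmax(scores, mask):
    # Additive attention masking: positions where mask is False get -inf,
    # so exp(-inf) = 0 and they contribute nothing after normalization.
    # Assumes at least one position is unmasked.
    biased = [s if keep else float("-inf") for s, keep in zip(scores, mask)]
    m = max(b for b in biased if b != float("-inf"))  # subtract max for stability
    exps = [math.exp(b - m) if b != float("-inf") else 0.0 for b in biased]
    z = sum(exps)
    return [e / z for e in exps]
```

In practice the mask is applied as a large negative additive bias on the score tensor; the list-based version above just makes the arithmetic visible.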
comfyanonymous
a373367b0c Fix some OOM issues with split and sub quad attention. 2023-10-25 20:17:28 -04:00
comfyanonymous
ddc6f12ad5 Disable autocast in unet for increased speed. 2023-07-05 21:58:29 -04:00
comfyanonymous
73c3e11e83 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous
3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2023-03-22 14:49:00 -04:00
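The retry logic this commit describes is a try/except fallback: attempt the regular VAE decode, and if it runs out of memory, decode again in tiles. A toy sketch of the pattern; the function names are hypothetical, and the stdlib `MemoryError` stands in for the CUDA OOM exception ComfyUI actually catches:

```python
def decode_with_fallback(decode_full, decode_tiled, latent):
    # Try the regular (full-resolution) decode first; fall back to the
    # slower but memory-friendly tiled decode only on an OOM error.
    try:
        return decode_full(latent)
    except MemoryError:  # stand-in for torch.cuda.OutOfMemoryError
        return decode_tiled(latent)
```

The key design point is that the fallback is only taken on failure, so the fast path pays no cost when memory is sufficient.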
edikius
165be5828a Fixed import (#44) 2023-03-06 11:41:40 -05:00
* fixed import error: hit "ImportError: cannot import name 'Protocol' from 'typing'" while trying to update, so fixed the import to let the app start
* Update main.py
* deleted example files
comfyanonymous
1a4edd19cd Fix overflow issue with inplace softmax. 2023-02-10 11:47:41 -05:00
comfyanonymous
df40d4f3bf torch.cuda.OutOfMemoryError is not present on older pytorch versions. 2023-02-09 12:33:27 -05:00
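The version-compatibility issue here is that an `except` clause naming `torch.cuda.OutOfMemoryError` breaks on older PyTorch releases where the attribute doesn't exist. The usual fix is to resolve the exception type with a fallback via `getattr`. A small sketch of that pattern, generic over any module so it stays runnable without torch installed; `oom_exception` is a hypothetical helper:

```python
def oom_exception(module, fallback=MemoryError):
    # Return module.OutOfMemoryError if the attribute exists (newer
    # versions), otherwise a fallback exception type that is always
    # defined, so `except oom_exception(mod):` remains valid everywhere.
    return getattr(module, "OutOfMemoryError", fallback)
```

Resolving the type once at import time keeps the hot-path `except` clauses identical across library versions.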
comfyanonymous
047775615b Lower the chances of an OOM. 2023-02-08 14:24:27 -05:00
comfyanonymous
1daccf3678 Run softmax in place if it OOMs. 2023-01-30 19:55:01 -05:00
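Running the softmax in place (this commit) avoids allocating a second buffer the size of the attention score matrix, and subtracting the row maximum first is the standard guard against the overflow fixed in 1a4edd19cd. A list-based sketch of both ideas together; not ComfyUI's tensor code:

```python
import math

def softmax_inplace(x):
    # Subtract the max before exponentiating (numerical stability), then
    # write the exponentials and the normalization back into the input
    # buffer, so no second array of the same size is allocated.
    m = max(x)
    total = 0.0
    for i, v in enumerate(x):
        e = math.exp(v - m)
        x[i] = e
        total += e
    for i in range(len(x)):
        x[i] /= total
    return x
```

With large attention matrices the in-place variant halves peak memory for this step, which is why it is used as the OOM fallback.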
comfyanonymous
051f472e8f Fix sub quadratic attention for SD2 and make it the default optimization. 2023-01-25 01:22:43 -05:00
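Sub-quadratic attention, made the default in this commit, trades compute for memory by processing queries in chunks so only a small block of the score matrix exists at once, instead of the full `len(q) × len(k)` matrix. A toy pure-Python sketch of the chunking idea under that assumption; it is not ComfyUI's implementation, which also chunks over keys and uses online softmax accumulation:

```python
import math

def attention_chunked(q, k, v, chunk=2):
    # q, k, v: lists of equal-length float vectors. Computes
    # softmax(q @ k^T) @ v one query chunk at a time, so at most a
    # chunk x len(k) block of scores is materialized in memory.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    out = []
    for start in range(0, len(q), chunk):
        q_block = q[start:start + chunk]
        scores = [[dot(qi, kj) for kj in k] for qi in q_block]
        for row in scores:
            m = max(row)                         # stability shift
            exps = [math.exp(s - m) for s in row]
            z = sum(exps)
            out.append([sum((e / z) * vj[d] for e, vj in zip(exps, v))
                        for d in range(len(v[0]))])
    return out
```

Smaller chunks lower peak memory at the cost of more kernel launches; the result is identical to the unchunked computation.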
comfyanonymous
220afe3310 Initial commit. 2023-01-16 22:37:14 -05:00