comfyanonymous
2fd9c1308a
Fix mask issue in some attention functions.
2024-11-22 02:10:09 -05:00
comfyanonymous
8f0009aad0
Support new flux model variants.
2024-11-21 08:38:23 -05:00
comfyanonymous
07f6eeaa13
Fix mask issue with attention_xformers.
2024-11-20 17:07:46 -05:00
comfyanonymous
22535d0589
Skip layer guidance now works on stable audio model.
2024-11-20 07:33:06 -05:00
comfyanonymous
d9f90965c8
Support block replace patches in auraflow.
2024-11-17 08:19:59 -05:00
comfyanonymous
41886af138
Add transformer options blocks replace patch to mochi.
2024-11-16 20:48:14 -05:00
comfyanonymous
8ebf2d8831
Add block replace transformer_options to flux.
2024-11-12 08:00:39 -05:00
contentis
69694f40b3
fix dynamic shape export ( #5490 )
2024-11-04 14:59:28 -05:00
comfyanonymous
fabf449feb
Mochi VAE encoder.
2024-11-01 17:33:09 -04:00
comfyanonymous
30c0c81351
Add a way to patch blocks in SD3.
2024-10-29 00:48:32 -04:00
comfyanonymous
13b0ff8a6f
Update SD3 code.
2024-10-28 21:58:52 -04:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
...
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
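The linear_quadratic scheduler named above is the piece most specific to Mochi. Below is a minimal Python sketch of the idea, assuming a schedule that moves linearly through the low-noise region and then sweeps the rest quadratically; the function name, `threshold` default, and exact curve are illustrative, not ComfyUI's implementation.

```python
import torch

def linear_quadratic_sigmas(steps: int, threshold: float = 0.025) -> torch.Tensor:
    # Hypothetical sketch: spend the first half of the steps moving linearly
    # up to `threshold`, then cover the remaining range quadratically.
    half = steps // 2
    linear = [i * threshold / half for i in range(half)]
    quad = [threshold + (1.0 - threshold) * (i / (steps - half)) ** 2
            for i in range(1, steps - half + 1)]
    t = linear + quad  # monotonically increasing from 0.0 to 1.0
    return torch.tensor([1.0 - x for x in t])  # sigmas run high -> low
```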
contentis
5a8a48931a
remove attention abstraction ( #5324 )
2024-10-22 14:02:38 -04:00
comfyanonymous
d854ed0bcf
Allow using SD3 type te output on flux model.
2024-10-03 09:44:54 -04:00
City
8733191563
Flux torch.compile fix ( #5082 )
2024-09-27 22:07:51 -04:00
comfyanonymous
cf80d28689
Support loading controlnets with different input.
2024-09-13 09:54:37 -04:00
comfyanonymous
c27ebeb1c2
Fix onnx export not working on flux.
2024-09-06 03:21:52 -04:00
comfyanonymous
5cbaa9e07c
Mistoline flux controlnet support.
2024-09-05 00:05:17 -04:00
Jedrzej Kosinski
f04229b84d
Add emb_patch support to UNetModel forward ( #4779 )
2024-09-04 14:35:15 -04:00
comfyanonymous
63fafaef45
Fix potential issue with hydit controlnets.
2024-08-30 04:58:41 -04:00
comfyanonymous
10a79e9898
Implement model part of flux union controlnet.
2024-08-29 18:41:22 -04:00
comfyanonymous
ea3f39bd69
InstantX depth flux controlnet.
2024-08-29 02:14:19 -04:00
comfyanonymous
b33cd61070
InstantX canny controlnet.
2024-08-28 19:02:50 -04:00
comfyanonymous
d31e226650
Unify RMSNorm code.
2024-08-28 16:56:38 -04:00
comfyanonymous
ab130001a8
Do RMSNorm in native type.
2024-08-27 02:41:56 -04:00
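For reference, a minimal sketch of the RMSNorm these two commits concern, computed directly in the tensor's native dtype rather than upcasting to fp32 (standard formulation; the actual ComfyUI code may differ):

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # RMSNorm: scale by the reciprocal root-mean-square over the last dim.
    # Done in x's own dtype (no float32 upcast), per "Do RMSNorm in native type".
    rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x * rms * weight
```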
comfyanonymous
015f73dc49
Try a different type of flux fp16 fix.
2024-08-21 16:17:15 -04:00
comfyanonymous
d31df04c8a
Indentation.
2024-08-17 23:00:44 -04:00
Xrvk
e68763f40c
Add Flux model support for InstantX style controlnet residuals ( #4444 )
...
* Add Flux model support for InstantX style controlnet residuals
* Refactor Flux controlnet residual step to a separate method
* Rollback minor change
* New format for applying controlnet residuals: input->double_blocks, output->single_blocks
* Adjust XLabs Flux controlnet to fit new syntax of applying Flux controlnet residuals
* Remove unnecessary import and minor style change
2024-08-17 22:58:23 -04:00
comfyanonymous
33fb282d5c
Fix issue.
2024-08-14 02:51:47 -04:00
comfyanonymous
a5af64d3ce
Revert "Not sure if this actually changes anything but it can't hurt."
...
This reverts commit 34608de2e9.
2024-08-14 01:05:17 -04:00
comfyanonymous
34608de2e9
Not sure if this actually changes anything but it can't hurt.
2024-08-13 13:29:16 -04:00
comfyanonymous
5942c17d55
Order of operations matters.
2024-08-12 21:56:18 -04:00
comfyanonymous
c032b11e07
xlabs Flux controlnet implementation. ( #4260 )
...
* xlabs Flux controlnet.
* Fix not working on old python.
* Remove comment.
2024-08-12 21:22:22 -04:00
comfyanonymous
ae197f651b
Speed up hunyuan dit inference a bit.
2024-08-10 07:36:27 -04:00
comfyanonymous
a475ec2300
Cleanup HunyuanDit controlnets.
...
Use the ControlNetApply SD3 and HunyuanDiT node.
2024-08-09 02:59:34 -04:00
来新璐
06eb9fb426
feat: add support for HunYuanDit ControlNet ( #4245 )
...
* add support for HunYuanDit ControlNet
* fix hunyuandit controlnet
* fix typo in hunyuandit controlnet
* fix typo in hunyuandit controlnet
* fix code format style
* add control_weight support for HunyuanDit Controlnet
* use control_weights in HunyuanDit Controlnet
* fix typo
2024-08-09 02:59:24 -04:00
comfyanonymous
413322645e
Raw torch is faster than einops?
2024-08-08 22:09:29 -04:00
comfyanonymous
11200de970
Cleaner code.
2024-08-08 20:07:09 -04:00
comfyanonymous
8115d8cce9
Add Flux fp16 support hack.
2024-08-07 15:08:39 -04:00
a-One-Fan
a178e25912
Fix Flux FP64 math on XPU ( #4210 )
2024-08-05 01:26:20 -04:00
comfyanonymous
3b71f84b50
ONNX tracing fixes.
2024-08-04 15:45:43 -04:00
comfyanonymous
e638f2858a
Hack to make all resolutions work on Flux models.
2024-08-01 21:39:18 -04:00
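Flux patchifies the latent into 2x2 patches, so arbitrary resolutions need the latent padded up to an even size first. A sketch of that idea, assuming replicate padding; `pad_to_multiple` is an illustrative name, not the actual code:

```python
import math
import torch
import torch.nn.functional as F

def pad_to_multiple(latent: torch.Tensor, multiple: int = 2) -> torch.Tensor:
    # Pad H and W up to the next multiple so 2x2 patchification divides
    # evenly; the padded region can be cropped away again after sampling.
    h, w = latent.shape[-2:]
    pad_h = math.ceil(h / multiple) * multiple - h
    pad_w = math.ceil(w / multiple) * multiple - w
    return F.pad(latent, (0, pad_w, 0, pad_h), mode="replicate")
```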
comfyanonymous
48eb1399c0
Try to fix mac issue.
2024-08-01 13:41:27 -04:00
comfyanonymous
f2b80f95d2
Better Mac support on flux model.
2024-08-01 13:10:50 -04:00
comfyanonymous
8d34211a7a
Fix old python versions no longer working.
2024-08-01 09:57:20 -04:00
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
2024-08-01 09:49:29 -04:00
comfyanonymous
b85216a3c0
Lower T5 memory usage by a few hundred MB.
2024-07-31 00:52:34 -04:00
comfyanonymous
25853d0be8
Use common function for casting weights to input.
2024-07-30 10:49:14 -04:00
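A minimal sketch of such a shared casting helper, assuming the common pattern of matching a layer's parameters to the incoming activation (helper name and signature are illustrative):

```python
import torch

def cast_weight_bias(layer: torch.nn.Module, x: torch.Tensor):
    # Cast parameters to the input's dtype/device right before use, so one
    # helper replaces per-layer ad-hoc casts.
    weight = layer.weight.to(dtype=x.dtype, device=x.device)
    bias = None
    if getattr(layer, "bias", None) is not None:
        bias = layer.bias.to(dtype=x.dtype, device=x.device)
    return weight, bias
```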
comfyanonymous
79040635da
Remove unnecessary code.
2024-07-30 05:01:34 -04:00
comfyanonymous
66d35c07ce
Improve artifacts on hydit, auraflow and SD3 on specific resolutions.
...
This switches those models from reflection to circular padding, which breaks
seeds for resolutions that are not a multiple of 16 in pixel resolution but
should reduce the amount of artifacts when doing img2img at those
resolutions (see the padding illustration after this entry).
2024-07-29 20:48:50 -04:00
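A quick illustration of the two padding modes in question (plain PyTorch, not the model code):

```python
import torch
import torch.nn.functional as F

x = torch.arange(16.0).reshape(1, 1, 4, 4)
# Reflection padding mirrors interior rows/columns into the border...
print(F.pad(x, (1, 1, 1, 1), mode="reflect")[0, 0])
# ...while circular padding wraps values around from the opposite edge.
print(F.pad(x, (1, 1, 1, 1), mode="circular")[0, 0])
```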
comfyanonymous
8328a2d8cd
Let hunyuan dit work with all prompt lengths.
2024-07-26 12:11:32 -04:00
comfyanonymous
afe732bef9
Hunyuan dit can now accept longer prompts.
2024-07-26 11:52:58 -04:00
comfyanonymous
a5f4292f9f
Basic hunyuan dit implementation. ( #4102 )
...
* Let tokenizers return weights to be stored in the saved checkpoint.
* Basic hunyuan dit implementation.
* Fix some resolutions not working.
* Support hydit checkpoint save.
* Init with right dtype.
* Switch to optimized attention in pooler.
* Fix black images on hunyuan dit.
2024-07-25 18:21:08 -04:00
comfyanonymous
10b43ceea5
Remove duplicate code.
2024-07-24 01:12:59 -04:00
comfyanonymous
9f291d75b3
AuraFlow model implementation.
2024-07-11 16:52:26 -04:00
comfyanonymous
f8f7568d03
Basic SD3 controlnet implementation.
...
Still missing the node to properly use it.
2024-06-27 18:43:11 -04:00
comfyanonymous
66aaa14001
Controlnet refactor.
2024-06-27 18:43:11 -04:00
comfyanonymous
8ddc151a4c
Squash deprecation warning on new pytorch.
2024-06-16 13:06:23 -04:00
comfyanonymous
bb1969cab7
Initial support for the stable audio open model.
2024-06-15 12:14:56 -04:00
comfyanonymous
1281f933c1
Small optimization.
2024-06-15 02:44:38 -04:00
comfyanonymous
605e64f6d3
Fix lowvram issue.
2024-06-12 10:39:33 -04:00
Dango233
73ce178021
Remove redundancy in mmdit.py ( #3685 )
2024-06-11 06:30:25 -04:00
comfyanonymous
8c4a9befa7
SD3 Support.
2024-06-10 14:06:23 -04:00
comfyanonymous
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
comfyanonymous
8508df2569
Work around black image bug on Mac 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
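What "forcing attention upcasting" amounts to, sketched below; this is a simplification, since the real code gates upcasting per model and per attention backend:

```python
import torch

def upcast_attention(q, k, v):
    # Compute the similarity and softmax in fp32 even when q/k/v are fp16,
    # avoiding the overflow suspected of producing black images.
    scale = q.shape[-1] ** -0.5
    sim = torch.einsum("...id,...jd->...ij", q.float(), k.float()) * scale
    out = torch.einsum("...ij,...jd->...id", sim.softmax(dim=-1), v.float())
    return out.to(q.dtype)
```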
comfyanonymous
83d969e397
Disable xformers when tracing model.
2024-05-21 13:55:49 -04:00
comfyanonymous
1900e5119f
Fix potential issue.
2024-05-20 08:19:54 -04:00
comfyanonymous
0bdc2b15c7
Cleanup.
2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
comfyanonymous
46daf0a9a7
Add debug options to force on and off attention upcasting.
2024-05-16 04:09:41 -04:00
comfyanonymous
ec6f16adb6
Fix SAG.
2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
2024-05-14 17:00:50 -04:00
comfyanonymous
b0ab31d06c
Refactor attention upcasting code part 1.
2024-05-14 12:47:31 -04:00
comfyanonymous
2aed53c4ac
Workaround xformers bug.
2024-04-30 21:23:40 -04:00
comfyanonymous
d7897fff2c
Move cascade scale factor from stage_a to latent_formats.py
2024-03-16 14:49:35 -04:00
comfyanonymous
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
comfyanonymous
5f60ee246e
Support loading the sr cascade controlnet.
2024-03-07 01:22:48 -05:00
comfyanonymous
03e6e81629
Set upscale algorithm to bilinear for stable cascade controlnet.
2024-03-06 02:59:40 -05:00
comfyanonymous
03e83bb5d0
Support stable cascade canny controlnet.
2024-03-06 02:25:42 -05:00
comfyanonymous
cb7c3a2921
Allow image_only_indicator to be None.
2024-02-29 13:11:30 -05:00
comfyanonymous
b3e97fc714
Koala 700M and 1B support.
...
Use the UNET Loader node to load the unet files.
2024-02-28 12:10:11 -05:00
comfyanonymous
e93cdd0ad0
Remove print.
2024-02-19 11:47:26 -05:00
comfyanonymous
a7b5eaa7e3
Forgot to commit this.
2024-02-19 04:25:46 -05:00
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
2024-02-17 16:15:18 -05:00
comfyanonymous
11e3221f1f
fp8 weight support for Stable Cascade.
2024-02-17 15:27:31 -05:00
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
2024-02-17 12:13:13 -05:00
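The batch-size bugs in this cluster come down to getting the mask broadcast to the layout the attention kernels expect. A hedged sketch of that expansion (the helper name is illustrative):

```python
import torch

def expand_mask(mask: torch.Tensor, batch: int, heads: int) -> torch.Tensor:
    # Bring a (lq, lk) or (batch, lq, lk) mask to (batch * heads, lq, lk),
    # repeating it once per attention head.
    if mask.ndim == 2:
        mask = mask.unsqueeze(0).expand(batch, -1, -1)
    return mask.repeat_interleave(heads, dim=0)
```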
comfyanonymous
805c36ac9c
Make Stable Cascade work on old pytorch 2.0
2024-02-17 00:42:30 -05:00
comfyanonymous
667c92814e
Stable Cascade Stage B.
2024-02-16 13:02:03 -05:00
comfyanonymous
f83109f09b
Stable Cascade Stage C.
2024-02-16 10:55:08 -05:00
comfyanonymous
5e06baf112
Stable Cascade Stage A.
2024-02-16 06:30:39 -05:00
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
2024-02-07 18:52:51 -05:00
comfyanonymous
89507f8adf
Remove some unused imports.
2024-01-25 23:42:37 -05:00
comfyanonymous
2395ae740a
Make unclip more deterministic.
...
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
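A sketch of the seeded-noise idea (the generator plumbing here is illustrative; the actual change threads the seed through the unclip noise augmentation):

```python
import torch

def seeded_noise(shape, seed: int, device: str = "cpu") -> torch.Tensor:
    # An explicitly seeded generator makes the augmentation noise, and hence
    # the unclip result, reproducible across runs.
    g = torch.Generator(device=device).manual_seed(seed)
    return torch.randn(shape, generator=g, device=device)
```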
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
2024-01-07 13:52:08 -05:00
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
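These three commits add the same capability to the different attention backends. The reference behavior, as an unfused sketch: an additive mask joins the scores before softmax, so masked positions receive ~zero weight.

```python
import torch

def masked_attention(q, k, v, mask=None):
    # Unfused scaled-dot-product attention with an optional additive mask
    # (-inf, or a large negative value, at positions to be ignored).
    scale = q.shape[-1] ** -0.5
    scores = q @ k.transpose(-2, -1) * scale
    if mask is not None:
        scores = scores + mask
    return scores.softmax(dim=-1) @ v
```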
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
2024-01-03 14:27:11 -05:00