Commit Graph

385 Commits

Author SHA1 Message Date
comfyanonymous
5c38958e49 Tweak lowvram model memory so it's closer to what it was before. 2023-06-01 04:04:35 -04:00
comfyanonymous
94680732d3 Empty cache on mps. 2023-06-01 03:52:51 -04:00
comfyanonymous
03da8a3426 This is useless for inference. 2023-05-31 13:03:24 -04:00
comfyanonymous
eb448dd8e1 Auto load model in lowvram if not enough memory. 2023-05-30 12:36:41 -04:00
comfyanonymous
b9818eb910 Add route to get safetensors metadata:
/view_metadata/loras?filename=lora.safetensors
2023-05-29 02:48:50 -04:00
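The new route can be queried over plain HTTP. A minimal sketch of building the request URL, assuming the server runs on ComfyUI's default address `127.0.0.1:8188` (the `metadata_url` helper is hypothetical, named here for illustration):

```python
from urllib.parse import urlencode

def metadata_url(folder: str, filename: str, host: str = "127.0.0.1:8188") -> str:
    """Build the /view_metadata URL for a model file in a given folder type."""
    return f"http://{host}/view_metadata/{folder}?" + urlencode({"filename": filename})

url = metadata_url("loras", "lora.safetensors")
# fetch with e.g. urllib.request.urlopen(url) while the server is running
```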
comfyanonymous
a532888846 Support VAEs in diffusers format. 2023-05-28 02:02:09 -04:00
comfyanonymous
0fc483dcfd Refactor diffusers model convert code to be able to reuse it. 2023-05-28 01:55:40 -04:00
comfyanonymous
eb4bd7711a Remove einops. 2023-05-25 18:42:56 -04:00
comfyanonymous
87ab25fac7 Do operations in same order as the one it replaces. 2023-05-25 18:31:27 -04:00
comfyanonymous
2b1fac9708 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-05-25 14:44:16 -04:00
comfyanonymous
e1278fa925 Support old pytorch versions that don't have weights_only. 2023-05-25 13:30:59 -04:00
BlenderNeko
8b4b0c3188 vectorized bislerp 2023-05-25 19:23:47 +02:00
comfyanonymous
b8ccbec6d8 Various improvements to bislerp. 2023-05-23 11:40:24 -04:00
comfyanonymous
34887b8885 Add experimental bislerp algorithm for latent upscaling.
It's like bilinear but with slerp.
2023-05-23 03:12:56 -04:00
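As an illustration of the underlying idea (not the repository's actual implementation), slerp interpolates along the arc between two vectors rather than the straight chord, which preserves magnitude better than plain lerp:

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two 1-D vectors (lists of floats)."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))  # clamp for numerical safety
    theta = math.acos(dot)
    if theta < 1e-6:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Bislerp then applies this interpolation in both spatial directions, the way bilinear sampling applies lerp.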
comfyanonymous
6cc450579b Auto transpose images from exif data. 2023-05-22 00:22:24 -04:00
comfyanonymous
dc198650c0 sample_dpmpp_2m_sde no longer crashes when step == 1. 2023-05-21 11:34:29 -04:00
comfyanonymous
069657fbf3 Add DPM-Solver++(2M) SDE and exponential scheduler.
exponential scheduler is the one recommended with this sampler.
2023-05-21 01:46:03 -04:00
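An exponential scheduler spaces the noise levels evenly in log space between sigma_max and sigma_min. A hedged sketch, modeled on k-diffusion's exponential schedule rather than this repository's exact code:

```python
import math

def exponential_sigmas(n, sigma_min, sigma_max):
    """Noise schedule with n sigmas spaced evenly in log space, high to low."""
    log_max, log_min = math.log(sigma_max), math.log(sigma_min)
    return [math.exp(log_max + i * (log_min - log_max) / (n - 1)) for i in range(n)]
```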
comfyanonymous
b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2023-05-20 16:01:02 -04:00
comfyanonymous
797c4e8d3b Simplify and improve some vae attention code. 2023-05-20 15:07:21 -04:00
comfyanonymous
ef815ba1e2 Switch default scheduler to normal. 2023-05-15 00:29:56 -04:00
comfyanonymous
68d12b530e Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI 2023-05-14 15:39:39 -04:00
comfyanonymous
3a1f47764d Print the torch device that is used on startup. 2023-05-13 17:11:27 -04:00
BlenderNeko
1201d2eae5
Make nodes map over input lists (#579)
* allow nodes to map over lists

* make work with IS_CHANGED and VALIDATE_INPUTS

* give list outputs distinct socket shape

* add rebatch node

* add batch index logic

* add repeat latent batch

* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
BlenderNeko
19c014f429 comment out annoying print statement 2023-05-12 23:57:40 +02:00
BlenderNeko
d9e088ddfd minor changes for tiled sampler 2023-05-12 23:49:09 +02:00
comfyanonymous
f7c0f75d1f Auto batching improvements.
Try batching when cond sizes don't match with smart padding.
2023-05-10 13:59:24 -04:00
comfyanonymous
314e526c5c Not needed anymore because sampling works with any latent size. 2023-05-09 12:18:18 -04:00
comfyanonymous
c6e34963e4 Make t2i adapter work with any latent resolution. 2023-05-08 18:15:19 -04:00
comfyanonymous
a1f12e370d Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI 2023-05-07 17:19:03 -04:00
comfyanonymous
6fc4917634 Make maximum_batch_area take into account pytorch2.0 attention function.
More conservative xformers maximum_batch_area.
2023-05-06 19:58:54 -04:00
comfyanonymous
678f933d38 maximum_batch_area for xformers.
Remove useless code.
2023-05-06 19:28:46 -04:00
EllangoK
8e03c789a2 auto-launch cli arg 2023-05-06 16:59:40 -04:00
comfyanonymous
cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2023-05-05 18:11:41 -04:00
comfyanonymous
af9cc1fb6a Search recursively in subfolders for embeddings. 2023-05-05 01:28:48 -04:00
comfyanonymous
6ee11d7bc0 Fix import. 2023-05-05 00:19:35 -04:00
comfyanonymous
bae4fb4a9d Fix imports. 2023-05-04 18:10:29 -04:00
comfyanonymous
fcf513e0b6 Refactor. 2023-05-03 17:48:35 -04:00
comfyanonymous
a74e176a24 Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI 2023-05-03 16:24:56 -04:00
pythongosssss
5eeecf3fd5 remove unused import 2023-05-03 18:21:23 +01:00
pythongosssss
8912623ea9 use comfy progress bar 2023-05-03 18:19:22 +01:00
comfyanonymous
908dc1d5a8 Add a total_steps value to sampler callback. 2023-05-03 12:58:10 -04:00
pythongosssss
fdf57325f4 Merge remote-tracking branch 'origin/master' into tiled-progress 2023-05-03 17:33:42 +01:00
pythongosssss
27df74101e reduce duplication 2023-05-03 17:33:19 +01:00
comfyanonymous
93c64afaa9 Use sampler callback instead of tqdm hook for progress bar. 2023-05-02 23:00:49 -04:00
pythongosssss
06ad35b493 added progress to encode + upscale 2023-05-02 19:18:07 +01:00
comfyanonymous
ba8a4c3667 Change latent resolution step to 8. 2023-05-02 14:17:51 -04:00
comfyanonymous
66c8aa5c3e Make unet work with any input shape. 2023-05-02 13:31:43 -04:00
comfyanonymous
9c335a553f LoKR support. 2023-05-01 18:18:23 -04:00
comfyanonymous
d3293c8339 Properly disable all progress bars when disable_pbar=True 2023-05-01 15:52:17 -04:00
BlenderNeko
a2e18b1504 allow disabling of progress bar when sampling 2023-04-30 18:59:58 +02:00
comfyanonymous
071011aebe Mask strength should be separate from area strength. 2023-04-29 20:06:53 -04:00
comfyanonymous
870fae62e7 Merge branch 'condition_by_mask_node' of https://github.com/guill/ComfyUI 2023-04-29 15:05:18 -04:00
Jacob Segal
af02393c2a Default to sampling entire image
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.

I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error
2. Indices are returned as integers instead of floats so they can be
   used to index into tensors.
2023-04-29 00:16:58 -07:00
comfyanonymous
056e5545ff Don't try to get vram from xpu or cuda when directml is enabled. 2023-04-29 00:28:48 -04:00
comfyanonymous
2ca934f7d4 You can now select the device index with: --directml id
Like this for example: --directml 1
2023-04-28 16:51:35 -04:00
comfyanonymous
3baded9892 Basic torch_directml support. Use --directml to use it. 2023-04-28 14:28:57 -04:00
Jacob Segal
e214c917ae Add Condition by Mask node
This PR adds support for a Condition by Mask node. This node allows
conditioning to be limited to a non-rectangle area.
2023-04-27 20:03:27 -07:00
comfyanonymous
5a971cecdb Add callback to sampler function.
Callback format is: callback(step, x0, x)
2023-04-27 04:38:44 -04:00
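A minimal sketch of a callback matching the documented callback(step, x0, x) shape; the total_steps bookkeeping here is illustrative (a separate commit adds a total_steps value to the callback):

```python
def make_progress_callback(total_steps):
    """Return a sampler callback and the progress log it appends to."""
    log = []
    def callback(step, x0, x):
        # step: current step index; x0: denoised prediction; x: current latent
        log.append(f"step {step + 1}/{total_steps}")
    return callback, log

cb, log = make_progress_callback(total_steps=4)
for i in range(4):  # stand-in for the sampler loop invoking the callback
    cb(i, None, None)
```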
comfyanonymous
aa57136dae Some fixes to the batch masks PR. 2023-04-25 01:12:40 -04:00
comfyanonymous
c50208a703 Refactor more code to sample.py 2023-04-24 23:25:51 -04:00
comfyanonymous
7983b3a975 This is cleaner this way. 2023-04-24 22:45:35 -04:00
BlenderNeko
0b07b2cc0f gligen tuple 2023-04-24 21:47:57 +02:00
pythongosssss
c8c9926eeb Add progress to vae decode tiled 2023-04-24 11:55:44 +01:00
BlenderNeko
d9b1595f85 made sample functions more explicit 2023-04-24 12:53:10 +02:00
BlenderNeko
5818539743 add docstrings 2023-04-23 20:09:09 +02:00
BlenderNeko
8d2de420d3 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-23 20:02:18 +02:00
BlenderNeko
2a09e2aa27 refactor/split various bits of code for sampling 2023-04-23 20:02:08 +02:00
comfyanonymous
5282f56434 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
comfyanonymous
6908f9c949 This makes pytorch2.0 attention perform a bit faster. 2023-04-22 14:30:39 -04:00
comfyanonymous
907010e082 Remove some useless code. 2023-04-20 23:58:25 -04:00
comfyanonymous
96b57a9ad6 Don't pass adm to model when it doesn't support it. 2023-04-19 21:11:38 -04:00
comfyanonymous
3696d1699a Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
comfyanonymous
884ea653c8 Add a way for nodes to set a custom CFG function. 2023-04-17 11:05:15 -04:00
comfyanonymous
73c3e11e83 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous
81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2023-04-15 18:46:58 -04:00
comfyanonymous
719c26c3c9 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-15 14:16:50 -04:00
BlenderNeko
d0b1b6c6bf fixed improper padding 2023-04-15 19:38:21 +02:00
comfyanonymous
deb2b93e79 Move code to empty gpu cache to model_management.py 2023-04-15 11:19:07 -04:00
comfyanonymous
04d9bc13af Safely load pickled embeds that don't load with weights_only=True. 2023-04-14 15:33:43 -04:00
BlenderNeko
da115bd78d ensure backwards compat with optional args 2023-04-14 21:16:55 +02:00
BlenderNeko
752f7a162b align behavior with old tokenize function 2023-04-14 21:02:45 +02:00
comfyanonymous
334aab05e5 Don't stop workflow if loading embedding fails. 2023-04-14 13:54:00 -04:00
BlenderNeko
73175cf58c split tokenizer from encoder 2023-04-13 22:06:50 +02:00
BlenderNeko
8489cba140 add unique ID per word/embedding for tokenizer 2023-04-13 22:01:01 +02:00
comfyanonymous
92eca60ec9 Fix for new transformers version. 2023-04-09 15:55:21 -04:00
comfyanonymous
1e1875f674 Print xformers version and warning about 0.0.18 2023-04-09 01:31:47 -04:00
comfyanonymous
7e254d2f69 Clarify what --windows-standalone-build does. 2023-04-07 15:52:56 -04:00
comfyanonymous
44fea05064 Cleanup. 2023-04-07 02:31:46 -04:00
comfyanonymous
58ed0f2da4 Fix loading SD1.5 diffusers checkpoint. 2023-04-07 01:30:33 -04:00
comfyanonymous
8b9ac8fedb Merge branch 'master' of https://github.com/sALTaccount/ComfyUI 2023-04-07 01:03:43 -04:00
comfyanonymous
64557d6781 Add a --force-fp32 argument to force fp32 for debugging. 2023-04-07 00:27:54 -04:00
comfyanonymous
bceccca0e5 Small refactor. 2023-04-06 23:53:54 -04:00
comfyanonymous
28a7205739 Merge branch 'ipex' of https://github.com/kwaa/ComfyUI-IPEX 2023-04-06 23:45:29 -04:00
藍+85CD
05eeaa2de5
Merge branch 'master' into ipex 2023-04-07 09:11:30 +08:00
EllangoK
28fff5d1db fixes lack of support for multi configs
also adds some metavars to argparse
2023-04-06 19:06:39 -04:00
comfyanonymous
f84f2508cc Rename the cors parameter to something more verbose. 2023-04-06 15:24:55 -04:00
EllangoK
48efae1608 makes cors a cli parameter 2023-04-06 15:06:22 -04:00
EllangoK
01c1fc669f set listen flag to listen on all if specified 2023-04-06 13:19:00 -04:00
藍+85CD
3e2608e12b Fix auto lowvram detection on CUDA 2023-04-06 15:44:05 +08:00
sALTaccount
60127a8304 diffusers loader 2023-04-05 23:57:31 -07:00
藍+85CD
7cb924f684 Use separate variables instead of vram_state 2023-04-06 14:24:47 +08:00
藍+85CD
84b9c0ac2f Import intel_extension_for_pytorch as ipex 2023-04-06 12:27:22 +08:00
EllangoK
e5e587b1c0 separates out arg parser and imports args 2023-04-05 23:41:23 -04:00
藍+85CD
37713e3b0a Add basic XPU device support
closed #387
2023-04-05 21:22:14 +08:00
comfyanonymous
e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2023-04-04 22:22:02 -04:00
comfyanonymous
1718730e80 Ignore embeddings when sizes don't match and print a WARNING. 2023-04-04 11:49:29 -04:00
comfyanonymous
23524ad8c5 Remove print. 2023-04-03 22:58:54 -04:00
comfyanonymous
539ff487a8 Pull latest tomesd code from upstream. 2023-04-03 15:49:28 -04:00
comfyanonymous
f50b1fec69 Add noise augmentation setting to unCLIPConditioning. 2023-04-03 13:50:29 -04:00
comfyanonymous
809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
comfyanonymous
0d972b85e6 This seems to give better quality in tome. 2023-03-31 18:36:18 -04:00
comfyanonymous
18a6c1db33 Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
comfyanonymous
61ec3c9d5d Add a way to pass options to the transformers blocks. 2023-03-31 13:04:39 -04:00
comfyanonymous
afd65d3819 Fix noise mask not working with > 1 batch size on ksamplers. 2023-03-30 03:50:12 -04:00
comfyanonymous
b2554bc4dd Split VAE decode batches depending on free memory. 2023-03-29 02:24:37 -04:00
comfyanonymous
0d65cb17b7 Fix ddim_uniform crashing with 37 steps. 2023-03-28 16:29:35 -04:00
Francesco Yoshi Gobbo
f55755f0d2
code cleanup 2023-03-27 06:48:09 +02:00
Francesco Yoshi Gobbo
cf0098d539
no lowvram state if cpu only 2023-03-27 04:51:18 +02:00
comfyanonymous
f5365c9c81 Fix ddim for Mac: #264 2023-03-26 00:36:54 -04:00
comfyanonymous
4adcea7228 I don't think controlnets were being handled correctly by MPS. 2023-03-24 14:33:16 -04:00
comfyanonymous
3c6ff8821c Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI 2023-03-24 13:56:43 -04:00
Yurii Mazurevich
fc71e7ea08 Fixed typo 2023-03-24 19:39:55 +02:00
comfyanonymous
7f0fd99b5d Make ddim work with --cpu 2023-03-24 11:39:51 -04:00
Yurii Mazurevich
4b943d2b60 Removed unnecessary comment 2023-03-24 14:15:30 +02:00
Yurii Mazurevich
89fd5ed574 Added MPS device support 2023-03-24 14:12:56 +02:00
comfyanonymous
dd095efc2c Support loha that use cp decomposition. 2023-03-23 04:32:25 -04:00
comfyanonymous
94a7c895f4 Add loha support. 2023-03-23 03:40:12 -04:00
comfyanonymous
3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2023-03-22 14:49:00 -04:00
comfyanonymous
4039616ca6 Less seams in tiled outputs at the cost of more processing. 2023-03-22 03:29:09 -04:00
comfyanonymous
c692509c2b Try to improve VAEEncode memory usage a bit. 2023-03-22 02:45:18 -04:00
comfyanonymous
9d0665c8d0 Add laptop quadro cards to fp32 list. 2023-03-21 16:57:35 -04:00
comfyanonymous
cc309568e1 Add support for locon mid weights. 2023-03-21 14:51:51 -04:00
comfyanonymous
edfc4ca663 Try to fix a vram issue with controlnets. 2023-03-19 10:50:38 -04:00
comfyanonymous
b4b21be707 Fix area composition feathering not working properly. 2023-03-19 02:00:52 -04:00
comfyanonymous
50099bcd96 Support multiple paths for embeddings. 2023-03-18 03:08:43 -04:00
comfyanonymous
2e73367f45 Merge T2IAdapterLoader and ControlNetLoader.
Workflows will be auto updated.
2023-03-17 18:17:59 -04:00
comfyanonymous
ee46bef03a Make --cpu have priority over everything else. 2023-03-13 21:30:01 -04:00
comfyanonymous
0e836d525e use half() on fp16 models loaded with config. 2023-03-13 21:12:48 -04:00
comfyanonymous
986dd820dc Use half() function on model when loading in fp16. 2023-03-13 20:58:09 -04:00
comfyanonymous
54dbfaf2ec Remove omegaconf dependency and some ci changes. 2023-03-13 14:49:18 -04:00
comfyanonymous
83f23f82b8 Add pytorch attention support to VAE. 2023-03-13 12:45:54 -04:00
comfyanonymous
a256a2abde --disable-xformers should not even try to import xformers. 2023-03-13 11:36:48 -04:00
comfyanonymous
0f3ba7482f Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes the code
open up ComfyUI in the browser.
2023-03-12 15:44:16 -04:00
comfyanonymous
e33dc2b33b Add a VAEEncodeTiled node. 2023-03-11 15:28:15 -05:00
comfyanonymous
1de86851b1 Try to fix memory issue. 2023-03-11 15:15:13 -05:00
comfyanonymous
2b1fce2943 Make tiled_scale work for downscaling. 2023-03-11 14:58:55 -05:00
comfyanonymous
9db2e97b47 Tiled upscaling with the upscale models. 2023-03-11 14:04:13 -05:00
comfyanonymous
cd64111c83 Add locon support. 2023-03-09 21:41:24 -05:00
comfyanonymous
c70f0ac64b SD2.x controlnets now work. 2023-03-08 01:13:38 -05:00
comfyanonymous
19415c3ace Relative imports to test something. 2023-03-07 11:00:35 -05:00
edikius
165be5828a
Fixed import (#44)
* fixed import error

I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update so I fixed it to start an app

* Update main.py

* deleted example files
2023-03-06 11:41:40 -05:00
comfyanonymous
501f19eec6 Fix clip_skip no longer being loaded from yaml file. 2023-03-06 11:34:02 -05:00
comfyanonymous
afff30fc0a Add --cpu to use the cpu for inference. 2023-03-06 10:50:50 -05:00
comfyanonymous
47acb3d73e Implement support for t2i style model.
It needs the CLIPVision model so I added CLIPVisionLoader and CLIPVisionEncode.

Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models

StyleModelLoader to load it, StyleModelApply to apply it
ConditioningAppend to append the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
comfyanonymous
cc8baf1080 Make VAE use common function to get free memory. 2023-03-05 14:20:07 -05:00
comfyanonymous
798c90e1c0 Fix pytorch 2.0 cross attention not working. 2023-03-05 14:14:54 -05:00
comfyanonymous
16130c7546 Add support for new colour T2I adapter model. 2023-03-03 19:13:07 -05:00
comfyanonymous
9d00235b41 Update T2I adapter code to latest. 2023-03-03 18:46:49 -05:00
comfyanonymous
ebfcf0a9c9 Fix issue. 2023-03-03 13:18:01 -05:00
comfyanonymous
4215206281 Add a node to set CLIP skip.
Use a simpler way to detect if the model is v-prediction.
2023-03-03 13:04:36 -05:00
comfyanonymous
fed315a76a To be really simple CheckpointLoaderSimple should pick the right type. 2023-03-03 11:07:10 -05:00
comfyanonymous
94bb0375b0 New CheckpointLoaderSimple to load checkpoints without a config. 2023-03-03 03:37:35 -05:00
comfyanonymous
c1f5855ac1 Make some cross attention functions work on the CPU. 2023-03-03 03:27:33 -05:00
comfyanonymous
1a612e1c74 Add some pytorch scaled_dot_product_attention code for testing.
--use-pytorch-cross-attention to use it.
2023-03-02 17:01:20 -05:00
comfyanonymous
69cc75fbf8 Add a way to interrupt current processing in the backend. 2023-03-02 14:42:03 -05:00
comfyanonymous
9502ee45c3 Hopefully fix a strange issue with xformers + lowvram. 2023-02-28 13:48:52 -05:00
comfyanonymous
b31daadc03 Try to improve memory issues with del. 2023-02-28 12:27:43 -05:00
comfyanonymous
2c5f0ec681 Small adjustment. 2023-02-27 20:04:18 -05:00
comfyanonymous
86721d5158 Enable highvram automatically when vram >> ram 2023-02-27 19:57:39 -05:00
comfyanonymous
75fa162531 Remove sample_ from some sampler names.
Old workflows will still work.
2023-02-27 01:43:06 -05:00
comfyanonymous
9f4214e534 Preparing to add another function to load checkpoints. 2023-02-26 17:29:01 -05:00
comfyanonymous
3cd7d84b53 Fix uni_pc sampler not working with 1 or 2 steps. 2023-02-26 04:01:01 -05:00
comfyanonymous
dfb397e034 Fix multiple controlnets not working. 2023-02-25 22:12:22 -05:00
comfyanonymous
af3cc1b5fb Fixed issue when batched image was used as a controlnet input. 2023-02-25 14:57:28 -05:00
comfyanonymous
d2da346b0b Fix missing variable. 2023-02-25 12:19:03 -05:00
comfyanonymous
4e6b83a80a Add a T2IAdapterLoader node to load T2I-Adapter models.
They are loaded as CONTROL_NET objects because they are similar.
2023-02-25 01:24:56 -05:00
comfyanonymous
fcb25d37db Prepare for t2i adapter. 2023-02-24 23:36:17 -05:00
comfyanonymous
cf5a211efc Remove some useless imports 2023-02-24 12:36:55 -05:00
comfyanonymous
87b00b37f6 Added an experimental VAEDecodeTiled.
This decodes the image with the VAE in tiles which should be faster and
use less vram.

It's in the _for_testing section so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs which means don't depend too much on it.
2023-02-24 02:10:10 -05:00
comfyanonymous
62df8dd62a Add a node to load diff controlnets. 2023-02-22 23:22:03 -05:00
comfyanonymous
f04dc2c2f4 Implement DDIM sampler. 2023-02-22 21:10:19 -05:00
comfyanonymous
2976c1ad28 Uni_PC: make max denoise behave more like other samplers.
On the KSamplers, denoise of 1.0 is the same as txt2img, but there was a
small difference on UniPC.
2023-02-22 02:21:06 -05:00
comfyanonymous
c9daec4c89 Remove prints that are useless when xformers is enabled. 2023-02-21 22:16:13 -05:00
comfyanonymous
a7328e4945 Add uni_pc bh2 variant. 2023-02-21 16:11:48 -05:00
comfyanonymous
d80af7ca30 ControlNetApply now stacks.
It can be used to apply multiple control nets at the same time.
2023-02-21 01:18:53 -05:00
comfyanonymous
00a9189e30 Support old pytorch. 2023-02-19 16:59:03 -05:00
comfyanonymous
137ae2606c Support people putting commas after the embedding name in the prompt. 2023-02-19 02:50:48 -05:00
comfyanonymous
2326ff1263 Add: --highvram for when you want models to stay on the vram. 2023-02-17 21:27:02 -05:00
comfyanonymous
09f1d76ed8 Fix an OOM issue. 2023-02-17 16:21:01 -05:00
comfyanonymous
d66415c021 Low vram mode for controlnets. 2023-02-17 15:48:16 -05:00
comfyanonymous
220a72d36b Use fp16 for fp16 control nets. 2023-02-17 15:31:38 -05:00
comfyanonymous
6135a21ee8 Add a way to control controlnet strength. 2023-02-16 18:08:01 -05:00
comfyanonymous
4efa67fa12 Add ControlNet support. 2023-02-16 10:38:08 -05:00
comfyanonymous
bc69fb5245 Use inpaint models the proper way by using VAEEncodeForInpaint. 2023-02-15 20:44:51 -05:00
comfyanonymous
cef2cc3cb0 Support for inpaint models. 2023-02-15 16:38:20 -05:00
comfyanonymous
07db00355f Add masks to samplers code for inpainting. 2023-02-15 13:16:38 -05:00
comfyanonymous
e3451cea4f uni_pc now works with KSamplerAdvanced return_with_leftover_noise. 2023-02-13 12:29:21 -05:00
comfyanonymous
f542f248f1 Show the right amount of steps in the progress bar for uni_pc.
The extra step doesn't actually call the unet so it doesn't belong in
the progress bar.
2023-02-11 14:59:42 -05:00
comfyanonymous
f10b8948c3 768-v support for uni_pc sampler. 2023-02-11 04:34:58 -05:00
comfyanonymous
ce0aeb109e Remove print. 2023-02-11 03:41:40 -05:00