Jedrzej Kosinski
a786ce5ead
Merge branch 'master' into worksplit-multigpu
2025-03-26 22:26:26 -05:00
comfyanonymous
8edc1f44c1
Support more float8 types.
2025-03-25 05:23:49 -04:00
Jedrzej Kosinski
219d3cd0d0
Merge branch 'master' into worksplit-multigpu
2025-03-17 14:26:35 -05:00
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. (#7223)
...
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
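A minimal sketch of what gating the attention backend behind such a flag could look like, assuming a flash_attn fallback path; this is illustrative, not ComfyUI's actual dispatch code:

```python
# Illustrative only: select flash-attn when the flag is set, otherwise fall
# back to PyTorch scaled_dot_product_attention.
import argparse
import torch
import torch.nn.functional as F

parser = argparse.ArgumentParser()
parser.add_argument("--use-flash-attention", action="store_true",
                    help="Prefer flash-attn kernels over PyTorch cross-attention.")
args = parser.parse_args([])  # example invocation with no CLI arguments

def attention(q, k, v):
    # q, k, v assumed to be (batch, heads, seq, dim)
    if args.use_flash_attention:
        try:
            from flash_attn import flash_attn_func
            # flash-attn expects (batch, seq, heads, dim) half-precision tensors
            out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
            return out.transpose(1, 2)
        except ImportError:
            pass  # flash-attn not installed: fall through to SDPA
    return F.scaled_dot_product_attention(q, k, v)
```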
Jedrzej Kosinski
cc928a786d
Merge branch 'master' into worksplit-multigpu
2025-03-13 20:59:11 -05:00
comfyanonymous
35504e2f93
Fix.
2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed
Print mac version.
2025-03-13 10:05:40 -04:00
Jedrzej Kosinski
6e144b98c4
Merge branch 'master' into worksplit-multigpu
2025-03-09 00:00:38 -06:00
comfyanonymous
0952569493
Fix stable cascade VAE on some lowvram machines.
2025-03-08 20:24:04 -05:00
Jedrzej Kosinski
6dca17bd2d
Satisfy ruff linting
2025-03-03 23:08:29 -06:00
Jedrzej Kosinski
5080105c23
Merge branch 'master' into worksplit-multigpu
2025-03-03 22:56:53 -06:00
Jedrzej Kosinski
093914a247
Made the MultiGPU Work Units node more robust by forcing ModelPatcher clones to match at sample time, reusing loaded MultiGPU clones, and finalizing the MultiGPU Work Units node ID and name; also small refactors and cleanup of logging and multigpu-related code
2025-03-03 22:56:13 -06:00
Chenlei Hu
4d55f16ae8
Use enum list for --fast options (#7024)
2025-03-01 02:37:35 -05:00
comfyanonymous
cf0b549d48
--fast now takes a number as an argument to indicate how fast you want it.
...
The idea is that you can indicate how much quality vs speed you want.
At the moment:
--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.
--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
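A rough sketch of how an optional-integer --fast level could be parsed and mapped to optimizations with argparse; the FAST_ALL sentinel and the optimization names are assumptions, only the 2/5 thresholds come from the commit message above:

```python
# Sketch: --fast with no number means "everything"; otherwise the integer is a
# speed/quality level. Names below are illustrative.
import argparse

FAST_ALL = 99  # assumed sentinel meaning "enable all optimizations"

parser = argparse.ArgumentParser()
parser.add_argument("--fast", nargs="?", type=int, const=FAST_ALL, default=0,
                    help="Trade quality for speed; higher enables more optimizations.")

def enabled_optimizations(level):
    opts = set()
    if level >= 2:
        opts.add("fp16_accumulation")   # --fast 2 in the commit above
    if level >= 5:
        opts.add("fp8_matrix_mult")     # --fast 5 adds fp8 matmul on fp8 models
    return opts

args = parser.parse_args(["--fast", "5"])
print(enabled_optimizations(args.fast))  # both optimizations enabled
```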
comfyanonymous
eb4543474b
Use fp16 for the intermediate computation for fp8 weights with --fast if supported.
2025-02-28 02:17:50 -05:00
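A hedged sketch of the idea behind this change: weights stay stored as fp8, while the matmul itself runs in an fp16 intermediate dtype. The fp8_linear helper is hypothetical:

```python
# Hypothetical helper: fp8 storage, fp16 compute for the intermediate result.
import torch
import torch.nn.functional as F

def fp8_linear(x, weight_fp8, bias=None, intermediate_dtype=torch.float16):
    # Upcast the fp8 weight only for the computation; storage stays fp8.
    w = weight_fp8.to(intermediate_dtype)
    return F.linear(x.to(intermediate_dtype), w, bias)
```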
comfyanonymous
1804397952
Use fp16 if checkpoint weights are fp16 and the model supports it.
2025-02-27 16:39:57 -05:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU (#6964)
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
Jedrzej Kosinski
605893d3cf
Merge branch 'master' into worksplit-multigpu
2025-02-24 19:23:16 -06:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
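A small illustrative sketch of that priority rule (function and parameter names are assumptions, not the model_management code):

```python
# Sketch: when fp16 matmul accumulation is enabled, fp16 is the fastest
# compute dtype, so it is preferred over bf16 even if both are supported.
import torch

def pick_compute_dtype(supported_dtypes, fp16_accumulation_enabled):
    if fp16_accumulation_enabled and torch.float16 in supported_dtypes:
        return torch.float16
    return supported_dtypes[0]

print(pick_compute_dtype([torch.bfloat16, torch.float16], True))   # torch.float16
print(pick_compute_dtype([torch.bfloat16, torch.float16], False))  # torch.bfloat16
```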
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
Jedrzej Kosinski
048f4f0b3a
Merge branch 'master' into worksplit-multigpu
2025-02-17 19:35:58 -06:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
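One future-proof approach, sketched under the assumption that the packaging module is available; not necessarily the exact refactor in the commit:

```python
# Sketch: parse torch.__version__ instead of comparing raw strings, so suffixes
# like "2.7.0.dev20250208+rocm6.3" do not break the check.
import torch
from packaging import version

def torch_version_at_least(minimum: str) -> bool:
    # Compare only the numeric release tuple so nightly/dev builds of a given
    # release still count as that release.
    return version.parse(torch.__version__).release >= version.parse(minimum).release

print(torch_version_at_least("2.4.0"))
```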
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
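A hedged sketch of arch-based gating on ROCm; the gcnArchName probing and the single-entry arch list reflect this commit's description, not the full upstream logic:

```python
# Sketch: enable memory-efficient attention only for AMD arches known to
# benefit, detected through the ROCm build of PyTorch.
import torch

MEM_EFFICIENT_ARCHES = {"gfx1100"}  # may grow as more arches are confirmed

def rocm_arch():
    if not (torch.cuda.is_available() and torch.version.hip):
        return None
    props = torch.cuda.get_device_properties(0)
    # ROCm builds expose names like "gfx1100" or "gfx90a:sramecc+:xnack-"
    return getattr(props, "gcnArchName", "").split(":")[0]

def should_enable_mem_efficient_attention():
    return rocm_arch() in MEM_EFFICIENT_ARCHES
```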
comfyanonymous
8773ccf74d
Better memory estimation for ROCm devices that support mem efficient attention.
...
There is no way to check whether the card actually supports it, so it is assumed
to be supported if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err (#6794)
2025-02-12 06:48:11 -05:00
Jedrzej Kosinski
d2504fb701
Merge branch 'master' into worksplit-multigpu
2025-02-11 22:34:51 -06:00
HishamC
b124256817
Fix for running via DirectML (#6542)
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add a CPU fallback for unsupported paths. Verified the model works on AMD GPUs.
* Fix formatting
* Update causal mask calculation
2025-02-11 17:11:32 -05:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast (#6453)
...
Currently only applies to PyTorch nightly releases (>= 20250208).
2025-02-08 17:00:56 -05:00
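A minimal sketch of enabling the knob only when the running PyTorch build exposes it (which at the time meant nightly builds); the attribute probing is an assumption about staying compatible with older releases, not the PR's exact code:

```python
# Sketch: turn on fp16 matmul accumulation if this PyTorch build has the knob.
import torch

def try_enable_fp16_accumulation() -> bool:
    matmul = torch.backends.cuda.matmul
    if hasattr(matmul, "allow_fp16_accumulation"):
        matmul.allow_fp16_accumulation = True
        return True
    return False  # older PyTorch: keep the default behaviour
```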
Jedrzej Kosinski
476aa79b64
Let --cuda-device take a string so that multiple devices (or a device order) can be chosen, print available devices on startup, and potentially support multi-GPU Intel and Ascend setups
2025-02-06 08:44:07 -06:00
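A sketch of how a string-valued --cuda-device could be handled; the CUDA_VISIBLE_DEVICES approach and the parsing are illustrative assumptions:

```python
# Sketch: accept "0" or "1,0,2" so multiple devices, or a device order, can be
# chosen. Visibility must be set before torch initializes CUDA.
import argparse
import os
from typing import Optional

parser = argparse.ArgumentParser()
parser.add_argument("--cuda-device", type=str, default=None,
                    help='Device index or comma-separated list, e.g. "0" or "1,0,2".')

def apply_device_selection(spec: Optional[str]):
    if spec is None:
        return None
    os.environ["CUDA_VISIBLE_DEVICES"] = spec
    return [int(i) for i in spec.split(",")]

args = parser.parse_args(["--cuda-device", "1,0"])
print(apply_device_selection(args.cuda_device))  # [1, 0]
```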
Jedrzej Kosinski
0b3233b4e2
Merge remote-tracking branch 'origin/master' into multigpu_support
2025-01-28 06:11:07 -06:00
comfyanonymous
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00
comfyanonymous
67feb05299
Remove redundant code.
2025-01-25 19:04:53 -05:00
Jedrzej Kosinski
5db4277449
Make sure additional_models are unloaded as well when perform
2025-01-23 19:06:05 -06:00
Jedrzej Kosinski
02a4d0ad7d
Added unload_model_and_clones to model_management.py to allow unloading only relevant models
2025-01-23 01:20:00 -06:00
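A heavily simplified sketch of the idea; the loaded_models list, is_clone, and model_unload names are assumptions about the surrounding code rather than the actual function body:

```python
# Sketch: unload the target model plus any clones of it, leaving unrelated
# loaded models in place.
def unload_model_and_clones(model, loaded_models):
    to_unload = [lm for lm in loaded_models
                 if lm.model is model or lm.model.is_clone(model)]
    for lm in to_unload:
        lm.model_unload()          # assumed method on the loaded-model wrapper
        loaded_models.remove(lm)
    return to_unload
```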
Jedrzej Kosinski
7448f02b7c
Initial proof of concept of splitting cond sampling between multiple GPUs
2025-01-08 03:33:05 -06:00
Jedrzej Kosinski
871258aa72
Add get_all_torch_devices to get the detected devices for the current torch hardware device type
2025-01-07 21:06:03 -06:00
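A sketch of what such a helper could return for a CUDA-or-CPU setup; the real function presumably also covers other backends (XPU, Ascend, MPS), which this illustration omits:

```python
# Sketch: enumerate every device of the currently active torch hardware type.
import torch

def get_all_torch_devices():
    if torch.cuda.is_available():
        return [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
    return [torch.device("cpu")]

print(get_all_torch_devices())
```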
comfyanonymous
d45ebb63f6
Remove old unused function.
2025-01-04 07:20:54 -05:00
comfyanonymous
9e9c8a1c64
Clear cache as often on AMD as Nvidia.
...
I think the issue this was working around has been solved.
If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux please report it.
2025-01-02 08:44:16 -05:00
comfyanonymous
160ca08138
Use python 3.9 in launch test instead of 3.8
...
Fix ruff check.
2024-12-26 20:05:54 -05:00