comfyanonymous
1c2d45d2b5
Fix typo in last PR. ( #8144 )
More robust model detection for future proofing.
2025-05-15 19:02:19 -04:00
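Robust model detection of the kind this commit describes usually works by probing a checkpoint's state dict for weight keys unique to each architecture. A minimal sketch of that idea — the marker keys below are hypothetical examples, not the actual keys ComfyUI checks:

```python
# Sketch of state-dict-based model detection. The marker keys are
# HYPOTHETICAL examples, not the real keys used by ComfyUI.
ARCH_MARKERS = {
    "flux": "double_blocks.0.img_attn.qkv.weight",
    "wan": "blocks.0.cross_attn.k_img.weight",
    "hunyuan_video": "txt_in.individual_token_refiner.blocks.0.norm1.weight",
}

def detect_architecture(state_dict):
    """Return the first architecture whose marker key is present, else None."""
    for arch, marker in ARCH_MARKERS.items():
        if marker in state_dict:
            return arch
    return None

sd = {"double_blocks.0.img_attn.qkv.weight": "tensor..."}
print(detect_architecture(sd))  # -> flux
```

Checking for characteristic keys rather than relying on file names is what makes detection "future proof": new variants of an architecture keep the same distinctive weight layout even when packaging changes.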
George0726
c820ef950d
Add Wan-FUN Camera Control models and Add WanCameraImageToVideo node ( #8013 )
* support wan camera models
* fix issues flagged by ruff check
* change camera_condition type; make camera_condition optional
* support camera trajectory nodes
* fix camera direction
---------
Co-authored-by: Qirui Sun <sunqr0667@126.com>
2025-05-15 19:00:43 -04:00
comfyanonymous
16417b40d9
Initial ACE-Step model implementation. ( #7972 )
2025-05-07 08:33:34 -04:00
comfyanonymous
271c9c5b9e
Better mem estimation for the LTXV 13B model. ( #7963 )
2025-05-06 09:52:37 -04:00
comfyanonymous
08ff5fa08a
Cleanup chroma PR.
2025-04-30 20:57:30 -04:00
Silver
4ca3d84277
Support for Chroma - Flux1 Schnell distilled with CFG ( #7355 )
* Upload files for Chroma Implementation
* Remove trailing whitespace
* trim more trailing whitespace..oops
* remove unused imports
* Add supported_inference_dtypes
* Set min_length to 0 and remove attention_mask=True
* Set min_length to 1
* get_modulations added from blepping and minor changes
* Add lora conversion if statement in lora.py
* Update supported_models.py
* update model_base.py
* add upstream commits
* set modelType.FLOW, will cause beta scheduler to work properly
* Adjust memory usage factor and remove unnecessary code
* fix mistake
* reduce code duplication
* remove unused imports
* refactor for upstream sync
* sync chroma-support with upstream via syncbranch patch
* Update sd.py
* Add Chroma as option for the OptimalStepsScheduler node
2025-04-30 20:57:00 -04:00
comfyanonymous
dbc726f80c
Better vace memory estimation. ( #7875 )
2025-04-29 20:42:00 -04:00
comfyanonymous
ce22f687cc
Support for WAN VACE preview model. ( #7711 )
* Support for WAN VACE preview model.
* Remove print.
2025-04-21 14:40:29 -04:00
comfyanonymous
9ad792f927
Basic support for hidream i1 model.
2025-04-15 17:35:05 -04:00
comfyanonymous
3661c833bc
Support the WAN 2.1 fun control models.
Use the new WanFunControlToVideo node.
2025-03-26 19:54:54 -04:00
thot experiment
83e839a89b
Native LotusD Implementation ( #7125 )
* draft pass at a native comfy implementation of Lotus-D depth and normal est
* fix model_sampling kludges
* fix ruff
---------
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
comfyanonymous
3872b43d4b
A few fixes for the hunyuan3d models.
2025-03-20 04:52:31 -04:00
comfyanonymous
11f1b41bab
Initial Hunyuan3Dv2 implementation.
Supports the multiview, mini, turbo models and VAEs.
2025-03-19 16:52:58 -04:00
comfyanonymous
11b1f27cb1
Set WAN default compute dtype to fp16.
2025-03-07 04:52:36 -05:00
comfyanonymous
29a70ca101
Support HunyuanVideo image to video model.
2025-03-06 03:07:15 -05:00
comfyanonymous
9c9a7f012a
Adjust ltxv memory factor.
2025-03-05 05:16:05 -05:00
comfyanonymous
7c7c70c400
Refactor skyreels i2v code.
2025-03-04 00:15:45 -05:00
comfyanonymous
b6fefe686b
Better wan memory estimation.
2025-02-26 07:51:22 -05:00
comfyanonymous
4ced06b879
WIP support for Wan I2V model.
2025-02-26 01:49:43 -05:00
comfyanonymous
cb06e9669b
Wan seems to work with fp16.
2025-02-25 21:37:12 -05:00
comfyanonymous
63023011b9
WIP support for Wan t2v model.
2025-02-25 17:20:35 -05:00
comfyanonymous
37cd448529
Set the shift for Lumina back to 6.
2025-02-05 14:49:52 -05:00
comfyanonymous
8ac2dddeed
Lower the default shift of lumina to reduce artifacts.
2025-02-04 06:50:37 -05:00
comfyanonymous
e5ea112a90
Support Lumina 2 model.
2025-02-04 04:16:30 -05:00
comfyanonymous
88ceb28e20
Tweak hunyuan memory usage factor.
2025-01-16 06:31:03 -05:00
comfyanonymous
9d8b6c1f46
More accurate memory estimation for cosmos and hunyuan video.
2025-01-16 03:48:40 -05:00
comfyanonymous
3aaabb12d4
Implement Cosmos Image/Video to World (Video) diffusion models.
Use CosmosImageToVideoLatent to set the input image/video.
2025-01-14 05:14:10 -05:00
comfyanonymous
2ff3104f70
WIP support for Nvidia Cosmos 7B and 14B text to world (video) models.
2025-01-10 09:14:16 -05:00
comfyanonymous
4b5bcd8ac4
Closer memory estimation for hunyuan dit model.
2024-12-27 07:37:00 -05:00
comfyanonymous
ceb50b2cbf
Closer memory estimation for pixart models.
2024-12-27 07:30:09 -05:00
City
bddb02660c
Add PixArt model support ( #6055 )
* PixArt initial version
* PixArt Diffusers convert logic
* pos_emb and interpolation logic
* Reduce duplicate code
* Formatting
* Use optimized attention
* Edit empty token logic
* Basic PixArt LoRA support
* Fix aspect ratio logic
* PixArtAlpha text encode with conds
* Use same detection key logic for PixArt diffusers
2024-12-20 15:25:00 -05:00
comfyanonymous
d6656b0c0c
Support llama hunyuan video text encoder in scaled fp8 format.
2024-12-17 04:19:22 -05:00
comfyanonymous
39b1fc4ccc
Adjust used dtypes for hunyuan video VAE and diffusion model.
2024-12-16 23:31:10 -05:00
comfyanonymous
bda1482a27
Basic Hunyuan Video model support.
2024-12-16 19:35:40 -05:00
Chenlei Hu
d9d7f3c619
Lint all unused variables ( #5989 )
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
comfyanonymous
b7143b74ce
Flux inpaint model does not work in fp16.
2024-11-26 01:33:01 -05:00
comfyanonymous
5e16f1d24b
Support Lightricks LTX-Video model.
2024-11-22 08:46:39 -05:00
comfyanonymous
8b275ce5be
Support auto detecting some zsnr anime checkpoints.
2024-11-11 05:34:11 -05:00
comfyanonymous
669d9e4c67
Set default shift on mochi to 6.0
2024-10-27 22:21:04 -04:00
comfyanonymous
9ee0a6553a
float16 inference is a bit broken on mochi.
2024-10-27 04:56:40 -04:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
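The steps listed in that commit map directly onto a node graph. Expressed in ComfyUI's API prompt format (a dict of node id to `class_type`/`inputs`), it might look roughly like the sketch below — the model file names are placeholders and the exact input names are from memory, so treat this as an illustration rather than a verified workflow:

```python
# Hedged sketch of the Mochi workflow described above, in ComfyUI's
# API prompt format. File names are PLACEHOLDERS; input names may
# differ slightly from the real node definitions.
prompt = {
    "1": {"class_type": "CLIPLoader",
          "inputs": {"clip_name": "t5xxl_fp16.safetensors", "type": "mochi"}},
    "2": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "mochi_dit.safetensors",
                     "weight_dtype": "default"}},
    "3": {"class_type": "VAELoader",
          "inputs": {"vae_name": "mochi_vae.safetensors"}},
    "4": {"class_type": "EmptyMochiLatentVideo",
          "inputs": {"width": 848, "height": 480, "length": 25,
                     "batch_size": 1}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 0], "text": "a timelapse of clouds"}},
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 0], "text": ""}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0], "positive": ["5", 0],
                     "negative": ["6", 0], "latent_image": ["4", 0],
                     "seed": 0, "steps": 30, "cfg": 4.5,
                     "sampler_name": "euler",
                     "scheduler": "linear_quadratic", "denoise": 1.0}},
}
```

The `["1", 0]` pairs are node references (node id, output index), which is how the API format wires one node's output into another's input.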
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
comfyanonymous
8f60d093ba
Fix issue.
2024-08-22 10:38:24 -04:00
comfyanonymous
0f9c2a7822
Try to fix SDXL OOM issue on some configurations.
2024-08-14 23:08:54 -04:00
comfyanonymous
8115d8cce9
Add Flux fp16 support hack.
2024-08-07 15:08:39 -04:00
comfyanonymous
2d75df45e6
Tweak Flux memory usage.
2024-08-05 21:58:28 -04:00
comfyanonymous
f123328b82
Load T5 in fp8 if it's in fp8 in the Flux checkpoint.
2024-08-03 12:39:33 -04:00
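Choosing the T5 load dtype from how the checkpoint was saved comes down to inspecting the stored tensor dtypes. A rough stand-in for that logic — dtypes are plain strings here instead of torch dtypes, and the key prefix is a guess, not ComfyUI's actual layout:

```python
# Sketch: pick the dtype to load T5 in based on the dtype it was saved
# in. Dtypes are strings for illustration; real code would inspect
# torch tensor .dtype values. The key prefix is HYPOTHETICAL.
def t5_load_dtype(state_dict, prefix="text_encoders.t5xxl."):
    for key, dtype in state_dict.items():
        if key.startswith(prefix) and key.endswith(".weight"):
            if dtype in ("float8_e4m3fn", "float8_e5m2"):
                return dtype  # keep the fp8 format it was saved in
            return "float16"  # otherwise load in a standard half dtype
    return "float16"

sd = {"text_encoders.t5xxl.encoder.block.0.q.weight": "float8_e4m3fn"}
print(t5_load_dtype(sd))  # -> float8_e4m3fn
```

Keeping fp8 weights in fp8 halves the text encoder's memory footprint relative to fp16, which is why it is worth detecting rather than always upcasting.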
comfyanonymous
ea03c9dcd2
Better per model memory usage estimations.
2024-08-02 18:09:24 -04:00
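Per-model memory estimation of the kind these commits keep tuning generally reduces to scaling the input latent's element count by a dtype size and a model-specific fudge factor. A simplified illustration — the factors and formula here are invented for the example, not ComfyUI's actual numbers:

```python
# Illustrative memory estimator: bytes ~ latent elements * dtype size
# * a per-architecture factor. The factors are INVENTED for this
# example and are not ComfyUI's real values.
MEMORY_FACTORS = {"sdxl": 2.5, "flux": 3.2, "hunyuan_video": 4.0}

def estimate_memory_bytes(latent_shape, dtype_bytes, arch):
    elements = 1
    for dim in latent_shape:
        elements *= dim
    return int(elements * dtype_bytes * MEMORY_FACTORS[arch] * 1024)

# e.g. a 1024x1024 SDXL image -> a (1, 4, 128, 128) latent in fp16
print(estimate_memory_bytes((1, 4, 128, 128), 2, "sdxl"))  # -> 335544320
```

Overestimating wastes VRAM headroom while underestimating causes out-of-memory errors mid-sampling, which is why the factors get adjusted per model as real-world usage data comes in.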
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
2024-08-01 09:49:29 -04:00
comfyanonymous
4ba7fa0244
Refactor: Move sd2_clip.py to text_encoders folder.
2024-07-28 01:19:20 -04:00