Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2025-04-16 08:33:29 +00:00)

Compare commits: 16 commits, c6175b69a5 ... f67488a578

Commits (SHA1):
f67488a578
22ad513c72
ed945a1790
f9207c6936
8ad7477647
98bdca4cb2
a26da20a76
e346d8584e
ab31b64412
fe29739c68
e8345a9b7b
8c6b9f4481
cc7e023a4a
2f7d8159c3
64d640171c
636374964d
@@ -245,7 +245,7 @@ You can install ComfyUI in Apple Mac silicon (M1 or M2) with any recent macOS ve
 1. Install pytorch nightly. For instructions, read the [Accelerated PyTorch training on Mac](https://developer.apple.com/metal/pytorch/) Apple Developer guide (make sure to install the latest pytorch nightly).
 1. Follow the [ComfyUI manual installation](#manual-install-windows-linux) instructions for Windows and Linux.
-1. Install the ComfyUI [dependencies](#dependencies). If you have another Stable Diffusion UI [you might be able to reuse the dependencies](#i-already-have-another-ui-for-stable-diffusion-installed-do-i-really-have-to-install-all-of-these-dependencies).
+1. Install the ComfyUI [dependencies](#dependencies). If you have another Stable Diffusion UI you might be able to reuse the dependencies.
 1. Launch ComfyUI by running `python main.py`

 > **Note**: Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in [ComfyUI manual installation](#manual-install-windows-linux).
READMEcn.md (new file, 297 lines)
@@ -0,0 +1,297 @@
<div align="center">

# ComfyUI
**The most powerful and modular diffusion model GUI and backend.**


[![Website][website-shield]][website-url]
[![Dynamic JSON Badge][discord-shield]][discord-url]
[![Matrix][matrix-shield]][matrix-url]
<br>
[![][github-release-shield]][github-release-link]
[![][github-release-date-shield]][github-release-link]
[![][github-downloads-shield]][github-downloads-link]
[![][github-downloads-latest-shield]][github-downloads-link]

[matrix-shield]: https://img.shields.io/badge/Matrix-000000?style=flat&logo=matrix&logoColor=white
[matrix-url]: https://app.element.io/#/room/%23comfyui_space%3Amatrix.org
[website-shield]: https://img.shields.io/badge/ComfyOrg-4285F4?style=flat
[website-url]: https://www.comfy.org/
<!-- Workaround to display total user from https://github.com/badges/shields/issues/4500#issuecomment-2060079995 -->
[discord-shield]: https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2Fcomfyorg%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&logo=discord&logoColor=white&label=Discord&color=green&suffix=%20total
[discord-url]: https://www.comfy.org/discord

[github-release-shield]: https://img.shields.io/github/v/release/comfyanonymous/ComfyUI?style=flat&sort=semver
[github-release-link]: https://github.com/comfyanonymous/ComfyUI/releases
[github-release-date-shield]: https://img.shields.io/github/release-date/comfyanonymous/ComfyUI?style=flat
[github-downloads-shield]: https://img.shields.io/github/downloads/comfyanonymous/ComfyUI/total?style=flat
[github-downloads-latest-shield]: https://img.shields.io/github/downloads/comfyanonymous/ComfyUI/latest/total?style=flat&label=downloads%40latest
[github-downloads-link]: https://github.com/comfyanonymous/ComfyUI/releases

![ComfyUI Screenshot](comfyui_screenshot.png)
</div>
This UI lets you design and execute advanced stable diffusion workflows using a graph/nodes/flowchart based interface. For some workflow examples and to see what ComfyUI can do, visit:

### [ComfyUI Examples](https://comfyanonymous.github.io/ComfyUI_examples/)

### [Installing ComfyUI](#installation)
## Features

- Nodes/graph/flowchart interface to experiment and create complex stable diffusion workflows without needing to code anything.
- Full support for SD1.x, SD2.x, [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/), [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/), [SD3](https://comfyanonymous.github.io/ComfyUI_examples/sd3/) and [Stable Audio](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
- [Flux models](https://comfyanonymous.github.io/ComfyUI_examples/flux/)
- Asynchronous queue system
- Many optimizations: only re-executes the parts of the workflow that changed between executions.
- Smart memory management: can automatically run models on GPUs with as low as 1GB of VRAM.
- Works even if you don't have a GPU with: ```--cpu``` (slow)
- Can load ckpt, safetensors and diffusers models/checkpoints. Standalone VAEs and CLIP models.
- Embeddings/Textual inversion
- [Loras (regular, locon and loha)](https://comfyanonymous.github.io/ComfyUI_examples/lora/)
- [Hypernetworks](https://comfyanonymous.github.io/ComfyUI_examples/hypernetworks/)
- Loading full workflows (with seeds) from generated PNG, WebP and FLAC files.
- Saving/loading workflows as Json files.
- The node interface can be used to create complex workflows such as [Hires fix](https://comfyanonymous.github.io/ComfyUI_examples/2_pass_txt2img/) or much more advanced ones.
- [Area Composition](https://comfyanonymous.github.io/ComfyUI_examples/area_composition/)
- [Inpainting](https://comfyanonymous.github.io/ComfyUI_examples/inpaint/) with both regular and inpainting models.
- [ControlNet and T2I-Adapter](https://comfyanonymous.github.io/ComfyUI_examples/controlnet/)
- [Upscale Models (ESRGAN, ESRGAN variants, SwinIR, Swin2SR, etc.)](https://comfyanonymous.github.io/ComfyUI_examples/upscale_models/)
- [unCLIP Models](https://comfyanonymous.github.io/ComfyUI_examples/unclip/)
- [GLIGEN Models](https://comfyanonymous.github.io/ComfyUI_examples/gligen/)
- [Model Merging](https://comfyanonymous.github.io/ComfyUI_examples/model_merging/)
- [LCM models and Loras](https://comfyanonymous.github.io/ComfyUI_examples/lcm/)
- [SDXL Turbo models](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
- [AuraFlow models](https://comfyanonymous.github.io/ComfyUI_examples/aura_flow/)
- [HunyuanDiT models](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_dit/)
- High-quality previews with [TAESD](#how-to-show-high-quality-previews)
- Starts up very fast.
- Works fully offline: nothing will ever be downloaded.
- [Config file](extra_model_paths.yaml.example) to set the search paths for models.

Workflow examples can be found on the [Examples page](https://comfyanonymous.github.io/ComfyUI_examples/)
## Shortcuts

| Keybind                            | Explanation                                                                                                          |
|------------------------------------|----------------------------------------------------------------------------------------------------------------------|
| Ctrl + Enter                       | Queue up current graph for generation                                                                                 |
| Ctrl + Shift + Enter               | Queue up current graph as first for generation                                                                        |
| Ctrl + Alt + Enter                 | Cancel current generation                                                                                             |
| Ctrl + Z/Ctrl + Y                  | Undo/Redo                                                                                                             |
| Ctrl + S                           | Save workflow                                                                                                         |
| Ctrl + O                           | Load workflow                                                                                                         |
| Ctrl + A                           | Select all nodes                                                                                                      |
| Alt + C                            | Collapse/uncollapse selected nodes                                                                                    |
| Ctrl + M                           | Mute/unmute selected nodes                                                                                            |
| Ctrl + B                           | Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)               |
| Delete/Backspace                   | Delete selected nodes                                                                                                 |
| Ctrl + Backspace                   | Delete the current graph                                                                                              |
| Space                              | Move the canvas around when held and moving the cursor                                                                |
| Ctrl/Shift + Click                 | Add clicked node to selection                                                                                         |
| Ctrl + C/Ctrl + V                  | Copy and paste selected nodes (without maintaining connections to outputs of unselected nodes)                        |
| Ctrl + C/Ctrl + Shift + V          | Copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes)    |
| Shift + Drag                       | Move multiple selected nodes at the same time                                                                         |
| Ctrl + D                           | Load default graph                                                                                                    |
| Alt + `+`                          | Canvas zoom in                                                                                                        |
| Alt + `-`                          | Canvas zoom out                                                                                                       |
| Ctrl + Shift + LMB + Vertical drag | Canvas zoom in/out                                                                                                    |
| P                                  | Pin/unpin selected nodes                                                                                              |
| Ctrl + G                           | Group selected nodes                                                                                                  |
| Q                                  | Toggle visibility of the queue                                                                                        |
| H                                  | Toggle visibility of history                                                                                          |
| R                                  | Refresh graph                                                                                                         |
| Double-Click LMB                   | Open node quick search palette                                                                                        |
| Shift + Drag                       | Move multiple wires at once                                                                                           |
| Ctrl + Alt + LMB                   | Disconnect all wires from clicked slot                                                                                |

For macOS users, the Ctrl key can be replaced with the Cmd key.
# Installation

## Windows

There is a portable standalone build for Windows on the [releases page](https://github.com/comfyanonymous/ComfyUI/releases) that should work for running on Nvidia GPUs or for running on your CPU only.

### [Direct download link](https://github.com/comfyanonymous/ComfyUI/releases/latest/download/ComfyUI_windows_portable_nvidia.7z)

Simply download, extract with [7-Zip](https://7-zip.org) and run. Make sure you put your Stable Diffusion checkpoints/models (the huge ckpt/safetensors files) in: ComfyUI\models\checkpoints

If you have trouble extracting it, right click the file -> properties -> unblock

#### How do I share models between another UI and ComfyUI?

See the [Config file](extra_model_paths.yaml.example) to set the search paths for models. In the standalone Windows build you can find this file in the ComfyUI directory. Rename this file to extra_model_paths.yaml and edit it with your favorite text editor.
## Jupyter Notebook

To run it on services like paperspace, kaggle or colab you can use the [Jupyter Notebook](notebooks/comfyui_colab.ipynb)

## Manual Install (Windows, Linux)

Git clone this repo.

Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints

Put your VAE in: models/vae
### AMD GPUs (Linux only)

AMD users can install rocm and pytorch with pip if you don't have it already installed, this is the command to install the stable version:

```pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1```

This is the command to install the nightly with ROCm 6.2 which might have some performance improvements:

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.2```
### NVIDIA

Nvidia users should install stable pytorch using this command:

```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124```

This is the command to install pytorch nightly instead, which might have performance improvements:

```pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124```

#### Troubleshooting

If you get the "Torch not compiled with CUDA enabled" error, uninstall torch with:

```pip uninstall torch```

And install it again with the command above.
### Dependencies

Install the dependencies by opening your terminal inside the ComfyUI folder and running:

```pip install -r requirements.txt```

After this you should have everything installed and can proceed to running ComfyUI.

### Other installation options:

#### Intel GPUs

Intel GPU support is available for all Intel GPUs supported by Intel's Extension for Pytorch (IPEX), with the support requirements listed on the [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) page. Choose your platform and method of install and follow the instructions. The steps are:

1. For Windows and Linux, first install the drivers or kernel listed on the IPEX installation page above (or newer), if needed.
1. Following the instructions for your platform, install [Intel's oneAPI Basekit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html).
1. Install the packaged IPEX using the instructions provided on the installation page for your platform.
1. Follow the [ComfyUI manual installation](#manual-install-windows-linux) instructions for Windows and Linux, and run ComfyUI normally as described above once everything is installed.

Additional discussion and help can be found [here](https://github.com/comfyanonymous/ComfyUI/discussions/476).
#### Apple Mac silicon

You can install ComfyUI on Apple Mac silicon (M1 or M2) with any recent macOS version.

1. Install pytorch nightly. For instructions, read the [Accelerated PyTorch training on Mac](https://developer.apple.com/metal/pytorch/) Apple Developer guide (make sure to install the latest pytorch nightly).
1. Follow the [ComfyUI manual installation](#manual-install-windows-linux) instructions for Windows and Linux.
1. Install the ComfyUI [dependencies](#dependencies). If you have another Stable Diffusion UI you might be able to reuse the dependencies.
1. Launch ComfyUI by running `python main.py`

> **Note**: Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in [ComfyUI manual installation](#manual-install-windows-linux).
#### DirectML (AMD Cards on Windows)

```pip install torch-directml``` Then you can launch ComfyUI with: ```python main.py --directml```
# Running

```python main.py```

### For AMD cards not officially supported by ROCm

Try running it with this command if you have issues:

For 6700, 6600 and other RDNA2 or older cards: ```HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py```

For AMD 7600 and other RDNA3 cards: ```HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py```
# Notes

Only parts of the graph that have an output with all the correct inputs will be executed.

Only parts of the graph that change between each execution will be executed; if you submit the same graph twice only the first will be executed. If you change the last part of the graph only the part you changed and the parts that depend on it will be executed.

Dragging a generated png onto the webpage or loading one will give you the full workflow including the seeds that were used to create it.

You can use () to change the emphasis of a word or phrase, like: (good code:1.2) or (bad code:0.8). The default emphasis for () is 1.1. To use () characters in your actual prompt escape them like \\( or \\).

You can use {day|night} for wildcard/dynamic prompts. With this syntax "{wild|card|test}" will be randomly replaced by either "wild", "card" or "test" by the frontend every time you queue the prompt. To use {} characters in your actual prompt escape them like: \\{ or \\}.

Dynamic prompts also support C-style comments, like `// comment` or `/* comment */`.
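As a rough illustration of the substitution rule described above, here is a minimal Python sketch of the semantics (this is not the frontend's actual implementation, and it ignores escaped braces):

```python
import random
import re

def expand_dynamic_prompt(prompt: str) -> str:
    """Replace every {a|b|c} group with one randomly chosen option."""
    pattern = re.compile(r"\{([^{}]+)\}")
    # Keep substituting innermost groups until none remain.
    while True:
        match = pattern.search(prompt)
        if match is None:
            return prompt
        options = match.group(1).split("|")
        prompt = prompt[:match.start()] + random.choice(options) + prompt[match.end():]

print(expand_dynamic_prompt("a photo of a city at {day|night}"))
```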
To use a textual inversion concept/embedding in a text prompt, put it in the models/embeddings directory and use it in the CLIPTextEncode node like this (you can omit the .pt extension):

```embedding:embedding_filename.pt```
## How to show high-quality previews?

Use ```--preview-method auto``` to enable previews.

The default installation includes a fast, low-resolution latent preview method. To enable higher-quality previews with [TAESD](https://github.com/madebyollin/taesd), download the [taesd_decoder.pth, taesdxl_decoder.pth, taesd3_decoder.pth and taef1_decoder.pth](https://github.com/madebyollin/taesd/) and place them in the `models/vae_approx` folder. Once they're installed, restart ComfyUI and launch it with `--preview-method taesd` to enable high-quality previews.
## How to use TLS/SSL?

Generate a self-signed certificate (not appropriate for shared/production use) and key by running the command: `openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"`

Use `--tls-keyfile key.pem --tls-certfile cert.pem` to enable TLS/SSL, and the app will then be accessible with `https://...` instead of `http://...`.

> Note: Windows users can use [alexisrolland/docker-openssl](https://github.com/alexisrolland/docker-openssl) or one of the [3rd party binary distributions](https://wiki.openssl.org/index.php/Binaries) to run the command example above.
<br/><br/>If you use a container, note that the volume mount `-v` can be a relative path, so `... -v ".\:/openssl-certs" ...` would create the key and cert files in the current directory of your command prompt or powershell terminal.
## Support and dev channel

[Matrix space: #comfyui_space:matrix.org](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) (it's like discord but open source).

See also: [https://www.comfy.org/](https://www.comfy.org/)
## Frontend Development

As of August 15, 2024, we have transitioned to a new frontend, which is now hosted in a separate repository: [ComfyUI Frontend](https://github.com/Comfy-Org/ComfyUI_frontend). This repository now hosts the compiled JS (from TS/Vue) under the `web/` directory.

### Reporting Issues and Requesting Features

For any bugs, issues, or feature requests related to the frontend, please use the [ComfyUI Frontend repository](https://github.com/Comfy-Org/ComfyUI_frontend). This will help us manage and address frontend-specific concerns more efficiently.

### Using the Latest Frontend

The new frontend is now the default frontend for ComfyUI. However, please note:

1. The frontend in the main ComfyUI repository is updated weekly.
2. Daily releases are available in the separate frontend repository.

To use the most up-to-date frontend version:

1. For the latest daily release, launch ComfyUI with this command line argument:

```
--front-end-version Comfy-Org/ComfyUI_frontend@latest
```

2. For a specific version, replace `latest` with the desired version number:

```
--front-end-version Comfy-Org/ComfyUI_frontend@1.2.2
```

This approach allows you to easily switch between the stable weekly release and the cutting-edge daily updates, or even specific versions for testing purposes.

### Accessing the Legacy Frontend

If you need to use the legacy frontend for any reason, you can access it using the following command line argument:

```
--front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
```

This will use a snapshot of the legacy frontend preserved in the [ComfyUI Legacy Frontend repository](https://github.com/Comfy-Org/ComfyUI_legacy_frontend).
# QA

### Which GPU should I buy for this?

[See this page for some recommendations](https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI)
@@ -101,6 +101,7 @@ parser.add_argument("--preview-size", type=int, default=512, help="Sets the maxi
 cache_group = parser.add_mutually_exclusive_group()
 cache_group.add_argument("--cache-classic", action="store_true", help="Use the old style (aggressive) caching.")
 cache_group.add_argument("--cache-lru", type=int, default=0, help="Use LRU caching with a maximum of N node results cached. May use more RAM/VRAM.")
+cache_group.add_argument("--cache-none", action="store_true", help="Reduced RAM/VRAM usage at the expense of executing every node for each run.")

 attn_group = parser.add_mutually_exclusive_group()
 attn_group.add_argument("--use-split-cross-attention", action="store_true", help="Use the split cross attention optimization. Ignored when xformers is used.")
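The new --cache-none flag joins the existing flags in one mutually exclusive argparse group, so only one caching mode can be requested per launch. A minimal, self-contained sketch of that behaviour (illustration only, not ComfyUI's actual parser):

```python
import argparse

parser = argparse.ArgumentParser()
cache_group = parser.add_mutually_exclusive_group()
cache_group.add_argument("--cache-classic", action="store_true")
cache_group.add_argument("--cache-lru", type=int, default=0)
cache_group.add_argument("--cache-none", action="store_true")

# One flag at a time parses fine:
print(parser.parse_args(["--cache-none"]))
# Namespace(cache_classic=False, cache_lru=0, cache_none=True)

# Combining two of them makes argparse exit with an error:
# parser.parse_args(["--cache-none", "--cache-lru", "10"])
```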
@@ -102,9 +102,13 @@ class InputTypeOptions(TypedDict):
     default: bool | str | float | int | list | tuple
     """The default value of the widget"""
     defaultInput: bool
-    """Defaults to an input slot rather than a widget"""
+    """@deprecated in v1.16 frontend. v1.16 frontend allows input socket and widget to co-exist.
+    - defaultInput on required inputs should be dropped.
+    - defaultInput on optional inputs should be replaced with forceInput.
+    Ref: https://github.com/Comfy-Org/ComfyUI_frontend/pull/3364
+    """
     forceInput: bool
-    """`defaultInput` and also don't allow converting to a widget"""
+    """Forces the input to be an input slot rather than a widget even a widget is available for the input type."""
     lazy: bool
     """Declares that this input uses lazy evaluation"""
     rawLink: bool
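As a hedged illustration of the migration the new docstring recommends: a custom node that previously set `defaultInput` on an optional input would declare `forceInput` instead. The node class and input names below are made up for the example; only the `forceInput` option itself comes from the typing above.

```python
class ExampleTextNode:
    """Hypothetical custom node used only to illustrate the forceInput option."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # Plain widget input: no defaultInput needed in the v1.16+ frontend.
                "prompt": ("STRING", {"default": "", "multiline": True}),
            },
            "optional": {
                # Previously: {"defaultInput": True}; forceInput keeps this a socket only.
                "prefix": ("STRING", {"forceInput": True}),
            },
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"
    CATEGORY = "example"

    def run(self, prompt, prefix=""):
        return (prefix + prompt,)
```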
@@ -48,6 +48,7 @@ def get_all_callbacks(call_type: str, transformer_options: dict, is_model_option

 class WrappersMP:
     OUTER_SAMPLE = "outer_sample"
+    PREPARE_SAMPLING = "prepare_sampling"
     SAMPLER_SAMPLE = "sampler_sample"
     CALC_COND_BATCH = "calc_cond_batch"
     APPLY_MODEL = "apply_model"
@@ -106,6 +106,13 @@ def cleanup_additional_models(models):


 def prepare_sampling(model: ModelPatcher, noise_shape, conds, model_options=None):
+    executor = comfy.patcher_extension.WrapperExecutor.new_executor(
+        _prepare_sampling,
+        comfy.patcher_extension.get_all_wrappers(comfy.patcher_extension.WrappersMP.PREPARE_SAMPLING, model_options, is_model_options=True)
+    )
+    return executor.execute(model, noise_shape, conds, model_options=model_options)
+
+def _prepare_sampling(model: ModelPatcher, noise_shape, conds, model_options=None):
     real_model: BaseModel = None
     models, inference_memory = get_additional_models(conds, model.model_dtype())
     models += get_additional_models_from_model_options(model_options)
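Because prepare_sampling is now routed through a WrapperExecutor, a wrapper registered under WrappersMP.PREPARE_SAMPLING is handed the executor first and must call it to continue the chain. A minimal sketch of that call shape, inferred from the `executor.execute(...)` call above (the wrapper body, and how it gets registered, are assumptions not shown in this diff):

```python
def my_prepare_sampling_wrapper(executor, model, noise_shape, conds, model_options=None):
    # Runs before the inner _prepare_sampling (and any wrappers nested inside).
    print("about to prepare sampling for noise shape:", noise_shape)
    result = executor(model, noise_shape, conds, model_options=model_options)
    # Runs after: result is whatever _prepare_sampling returned.
    return result
```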
@@ -316,3 +316,156 @@ class LRUCache(BasicCache):
             self.children[cache_key].append(self.cache_key_set.get_data_key(child_id))
         return self
+
+
+class DependencyAwareCache(BasicCache):
+    """
+    A cache implementation that tracks dependencies between nodes and manages
+    their execution and caching accordingly. It extends the BasicCache class.
+    Nodes are removed from this cache once all of their descendants have been
+    executed.
+    """
+
+    def __init__(self, key_class):
+        """
+        Initialize the DependencyAwareCache.
+
+        Args:
+            key_class: The class used for generating cache keys.
+        """
+        super().__init__(key_class)
+        self.descendants = {}  # Maps node_id -> set of descendant node_ids
+        self.ancestors = {}  # Maps node_id -> set of ancestor node_ids
+        self.executed_nodes = set()  # Tracks nodes that have been executed
+
+    def set_prompt(self, dynprompt, node_ids, is_changed_cache):
+        """
+        Clear the entire cache and rebuild the dependency graph.
+
+        Args:
+            dynprompt: The dynamic prompt object containing node information.
+            node_ids: List of node IDs to initialize the cache for.
+            is_changed_cache: Flag indicating if the cache has changed.
+        """
+        # Clear all existing cache data
+        self.cache.clear()
+        self.subcaches.clear()
+        self.descendants.clear()
+        self.ancestors.clear()
+        self.executed_nodes.clear()
+
+        # Call the parent method to initialize the cache with the new prompt
+        super().set_prompt(dynprompt, node_ids, is_changed_cache)
+
+        # Rebuild the dependency graph
+        self._build_dependency_graph(dynprompt, node_ids)
+
+    def _build_dependency_graph(self, dynprompt, node_ids):
+        """
+        Build the dependency graph for all nodes.
+
+        Args:
+            dynprompt: The dynamic prompt object containing node information.
+            node_ids: List of node IDs to build the graph for.
+        """
+        self.descendants.clear()
+        self.ancestors.clear()
+        for node_id in node_ids:
+            self.descendants[node_id] = set()
+            self.ancestors[node_id] = set()
+
+        for node_id in node_ids:
+            inputs = dynprompt.get_node(node_id)["inputs"]
+            for input_data in inputs.values():
+                if is_link(input_data):  # Check if the input is a link to another node
+                    ancestor_id = input_data[0]
+                    self.descendants[ancestor_id].add(node_id)
+                    self.ancestors[node_id].add(ancestor_id)
+
+    def set(self, node_id, value):
+        """
+        Mark a node as executed and store its value in the cache.
+
+        Args:
+            node_id: The ID of the node to store.
+            value: The value to store for the node.
+        """
+        self._set_immediate(node_id, value)
+        self.executed_nodes.add(node_id)
+        self._cleanup_ancestors(node_id)
+
+    def get(self, node_id):
+        """
+        Retrieve the cached value for a node.
+
+        Args:
+            node_id: The ID of the node to retrieve.
+
+        Returns:
+            The cached value for the node.
+        """
+        return self._get_immediate(node_id)
+
+    def ensure_subcache_for(self, node_id, children_ids):
+        """
+        Ensure a subcache exists for a node and update dependencies.
+
+        Args:
+            node_id: The ID of the parent node.
+            children_ids: List of child node IDs to associate with the parent node.
+
+        Returns:
+            The subcache object for the node.
+        """
+        subcache = super()._ensure_subcache(node_id, children_ids)
+        for child_id in children_ids:
+            self.descendants[node_id].add(child_id)
+            self.ancestors[child_id].add(node_id)
+        return subcache
+
+    def _cleanup_ancestors(self, node_id):
+        """
+        Check if ancestors of a node can be removed from the cache.
+
+        Args:
+            node_id: The ID of the node whose ancestors are to be checked.
+        """
+        for ancestor_id in self.ancestors.get(node_id, []):
+            if ancestor_id in self.executed_nodes:
+                # Remove ancestor if all its descendants have been executed
+                if all(descendant in self.executed_nodes for descendant in self.descendants[ancestor_id]):
+                    self._remove_node(ancestor_id)
+
+    def _remove_node(self, node_id):
+        """
+        Remove a node from the cache.
+
+        Args:
+            node_id: The ID of the node to remove.
+        """
+        cache_key = self.cache_key_set.get_data_key(node_id)
+        if cache_key in self.cache:
+            del self.cache[cache_key]
+        subcache_key = self.cache_key_set.get_subcache_key(node_id)
+        if subcache_key in self.subcaches:
+            del self.subcaches[subcache_key]
+
+    def clean_unused(self):
+        """
+        Clean up unused nodes. This is a no-op for this cache implementation.
+        """
+        pass
+
+    def recursive_debug_dump(self):
+        """
+        Dump the cache and dependency graph for debugging.
+
+        Returns:
+            A list containing the cache state and dependency graph.
+        """
+        result = super().recursive_debug_dump()
+        result.append({
+            "descendants": self.descendants,
+            "ancestors": self.ancestors,
+            "executed_nodes": list(self.executed_nodes),
+        })
+        return result
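The core behaviour of DependencyAwareCache is the _cleanup_ancestors step: a node's cached output is dropped as soon as every node that consumes it has executed. A tiny self-contained illustration of that policy using plain dicts (this is just the idea, not the cache classes above):

```python
# Toy graph: A feeds B and C; B feeds C.
descendants = {"A": {"B", "C"}, "B": {"C"}, "C": set()}
cache, executed = {}, set()

def finish(node_id, value):
    cache[node_id] = value
    executed.add(node_id)
    # Drop any cached node whose consumers have all finished.
    for nid, deps in descendants.items():
        if nid in cache and deps and deps <= executed:
            del cache[nid]

finish("A", "a_out"); print(sorted(cache))  # ['A']        B and C still need A
finish("B", "b_out"); print(sorted(cache))  # ['A', 'B']   C still needs both
finish("C", "c_out"); print(sorted(cache))  # ['C']        A and B evicted
```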
@@ -209,6 +209,196 @@ def voxel_to_mesh(voxels, threshold=0.5, device=None):
     vertices = torch.fliplr(vertices)
     return vertices, faces
+
+def voxel_to_mesh_surfnet(voxels, threshold=0.5, device=None):
+    if device is None:
+        device = torch.device("cpu")
+    voxels = voxels.to(device)
+
+    D, H, W = voxels.shape
+
+    padded = torch.nn.functional.pad(voxels, (1, 1, 1, 1, 1, 1), 'constant', 0)
+    z, y, x = torch.meshgrid(
+        torch.arange(D, device=device),
+        torch.arange(H, device=device),
+        torch.arange(W, device=device),
+        indexing='ij'
+    )
+    cell_positions = torch.stack([z.flatten(), y.flatten(), x.flatten()], dim=1)
+
+    corner_offsets = torch.tensor([
+        [0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
+        [0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]
+    ], device=device)
+
+    corner_values = torch.zeros((cell_positions.shape[0], 8), device=device)
+    for c, (dz, dy, dx) in enumerate(corner_offsets):
+        corner_values[:, c] = padded[
+            cell_positions[:, 0] + dz,
+            cell_positions[:, 1] + dy,
+            cell_positions[:, 2] + dx
+        ]
+
+    corner_signs = corner_values > threshold
+    has_inside = torch.any(corner_signs, dim=1)
+    has_outside = torch.any(~corner_signs, dim=1)
+    contains_surface = has_inside & has_outside
+
+    active_cells = cell_positions[contains_surface]
+    active_signs = corner_signs[contains_surface]
+    active_values = corner_values[contains_surface]
+
+    if active_cells.shape[0] == 0:
+        return torch.zeros((0, 3), device=device), torch.zeros((0, 3), dtype=torch.long, device=device)
+
+    edges = torch.tensor([
+        [0, 1], [0, 2], [0, 4], [1, 3],
+        [1, 5], [2, 3], [2, 6], [3, 7],
+        [4, 5], [4, 6], [5, 7], [6, 7]
+    ], device=device)
+
+    cell_vertices = {}
+    progress = comfy.utils.ProgressBar(100)
+
+    for edge_idx, (e1, e2) in enumerate(edges):
+        progress.update(1)
+        crossing = active_signs[:, e1] != active_signs[:, e2]
+        if not crossing.any():
+            continue
+
+        cell_indices = torch.nonzero(crossing, as_tuple=True)[0]
+
+        v1 = active_values[cell_indices, e1]
+        v2 = active_values[cell_indices, e2]
+
+        t = torch.zeros_like(v1, device=device)
+        denom = v2 - v1
+        valid = denom != 0
+        t[valid] = (threshold - v1[valid]) / denom[valid]
+        t[~valid] = 0.5
+
+        p1 = corner_offsets[e1].float()
+        p2 = corner_offsets[e2].float()
+
+        intersection = p1.unsqueeze(0) + t.unsqueeze(1) * (p2.unsqueeze(0) - p1.unsqueeze(0))
+
+        for i, point in zip(cell_indices.tolist(), intersection):
+            if i not in cell_vertices:
+                cell_vertices[i] = []
+            cell_vertices[i].append(point)
+
+    # Calculate the final vertices as the average of intersection points for each cell
+    vertices = []
+    vertex_lookup = {}
+
+    vert_progress_mod = round(len(cell_vertices)/50)
+
+    for i, points in cell_vertices.items():
+        if not i % vert_progress_mod:
+            progress.update(1)
+
+        if points:
+            vertex = torch.stack(points).mean(dim=0)
+            vertex = vertex + active_cells[i].float()
+            vertex_lookup[tuple(active_cells[i].tolist())] = len(vertices)
+            vertices.append(vertex)
+
+    if not vertices:
+        return torch.zeros((0, 3), device=device), torch.zeros((0, 3), dtype=torch.long, device=device)
+
+    final_vertices = torch.stack(vertices)
+
+    inside_corners_mask = active_signs
+    outside_corners_mask = ~active_signs
+
+    inside_counts = inside_corners_mask.sum(dim=1, keepdim=True).float()
+    outside_counts = outside_corners_mask.sum(dim=1, keepdim=True).float()
+
+    inside_pos = torch.zeros((active_cells.shape[0], 3), device=device)
+    outside_pos = torch.zeros((active_cells.shape[0], 3), device=device)
+
+    for i in range(8):
+        mask_inside = inside_corners_mask[:, i].unsqueeze(1)
+        mask_outside = outside_corners_mask[:, i].unsqueeze(1)
+        inside_pos += corner_offsets[i].float().unsqueeze(0) * mask_inside
+        outside_pos += corner_offsets[i].float().unsqueeze(0) * mask_outside
+
+    inside_pos /= inside_counts
+    outside_pos /= outside_counts
+    gradients = inside_pos - outside_pos
+
+    pos_dirs = torch.tensor([
+        [1, 0, 0],
+        [0, 1, 0],
+        [0, 0, 1]
+    ], device=device)
+
+    cross_products = [
+        torch.linalg.cross(pos_dirs[i].float(), pos_dirs[j].float())
+        for i in range(3) for j in range(i+1, 3)
+    ]
+
+    faces = []
+    all_keys = set(vertex_lookup.keys())
+
+    face_progress_mod = round(len(active_cells)/38*3)
+
+    for pair_idx, (i, j) in enumerate([(0,1), (0,2), (1,2)]):
+        dir_i = pos_dirs[i]
+        dir_j = pos_dirs[j]
+        cross_product = cross_products[pair_idx]
+
+        ni_positions = active_cells + dir_i
+        nj_positions = active_cells + dir_j
+        diag_positions = active_cells + dir_i + dir_j
+
+        alignments = torch.matmul(gradients, cross_product)
+
+        valid_quads = []
+        quad_indices = []
+
+        for idx, active_cell in enumerate(active_cells):
+            if not idx % face_progress_mod:
+                progress.update(1)
+            cell_key = tuple(active_cell.tolist())
+            ni_key = tuple(ni_positions[idx].tolist())
+            nj_key = tuple(nj_positions[idx].tolist())
+            diag_key = tuple(diag_positions[idx].tolist())
+
+            if cell_key in all_keys and ni_key in all_keys and nj_key in all_keys and diag_key in all_keys:
+                v0 = vertex_lookup[cell_key]
+                v1 = vertex_lookup[ni_key]
+                v2 = vertex_lookup[nj_key]
+                v3 = vertex_lookup[diag_key]
+
+                valid_quads.append((v0, v1, v2, v3))
+                quad_indices.append(idx)
+
+        for q_idx, (v0, v1, v2, v3) in enumerate(valid_quads):
+            cell_idx = quad_indices[q_idx]
+            if alignments[cell_idx] > 0:
+                faces.append(torch.tensor([v0, v1, v3], device=device, dtype=torch.long))
+                faces.append(torch.tensor([v0, v3, v2], device=device, dtype=torch.long))
+            else:
+                faces.append(torch.tensor([v0, v3, v1], device=device, dtype=torch.long))
+                faces.append(torch.tensor([v0, v2, v3], device=device, dtype=torch.long))
+
+    if faces:
+        faces = torch.stack(faces)
+    else:
+        faces = torch.zeros((0, 3), dtype=torch.long, device=device)
+
+    v_min = 0
+    v_max = max(D, H, W)
+
+    final_vertices = final_vertices - (v_min + v_max) / 2
+
+    scale = (v_max - v_min) / 2
+    if scale > 0:
+        final_vertices = final_vertices / scale
+
+    final_vertices = torch.fliplr(final_vertices)
+
+    return final_vertices, faces
+
 class MESH:
     def __init__(self, vertices, faces):
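A quick way to exercise the new surface-net function outside a workflow is to feed it a small synthetic voxel grid. The call below assumes voxel_to_mesh_surfnet is importable from the module this hunk touches and that comfy.utils.ProgressBar works without a running server (both assumptions, not shown in the diff):

```python
import torch

# Hypothetical standalone check: a solid 16^3 cube inside a 32^3 grid.
voxels = torch.zeros(32, 32, 32)
voxels[8:24, 8:24, 8:24] = 1.0

verts, faces = voxel_to_mesh_surfnet(voxels, threshold=0.5, device=None)
print(verts.shape, faces.shape)  # (N, 3) vertices and (M, 3) triangles approximating the cube's surface
```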
@@ -237,6 +427,34 @@ class VoxelToMeshBasic:
         return (MESH(torch.stack(vertices), torch.stack(faces)), )
+
+class VoxelToMesh:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"voxel": ("VOXEL", ),
+                             "algorithm": (["surface net", "basic"], ),
+                             "threshold": ("FLOAT", {"default": 0.6, "min": -1.0, "max": 1.0, "step": 0.01}),
+                             }}
+    RETURN_TYPES = ("MESH",)
+    FUNCTION = "decode"
+
+    CATEGORY = "3d"
+
+    def decode(self, voxel, algorithm, threshold):
+        vertices = []
+        faces = []
+
+        if algorithm == "basic":
+            mesh_function = voxel_to_mesh
+        elif algorithm == "surface net":
+            mesh_function = voxel_to_mesh_surfnet
+
+        for x in voxel.data:
+            v, f = mesh_function(x, threshold=threshold, device=None)
+            vertices.append(v)
+            faces.append(f)
+
+        return (MESH(torch.stack(vertices), torch.stack(faces)), )
+
+
 def save_glb(vertices, faces, filepath, metadata=None):
     """
@@ -411,5 +629,6 @@ NODE_CLASS_MAPPINGS = {
     "Hunyuan3Dv2ConditioningMultiView": Hunyuan3Dv2ConditioningMultiView,
     "VAEDecodeHunyuan3D": VAEDecodeHunyuan3D,
     "VoxelToMeshBasic": VoxelToMeshBasic,
+    "VoxelToMesh": VoxelToMesh,
     "SaveGLB": SaveGLB,
 }
execution.py (59 changed lines)
@@ -15,7 +15,7 @@ import nodes
 import comfy.model_management
 from comfy_execution.graph import get_input_info, ExecutionList, DynamicPrompt, ExecutionBlocker
 from comfy_execution.graph_utils import is_link, GraphBuilder
-from comfy_execution.caching import HierarchicalCache, LRUCache, CacheKeySetInputSignature, CacheKeySetID
+from comfy_execution.caching import HierarchicalCache, LRUCache, DependencyAwareCache, CacheKeySetInputSignature, CacheKeySetID
 from comfy_execution.validation import validate_node_input

 class ExecutionResult(Enum):
@@ -59,20 +59,27 @@ class IsChangedCache:
         self.is_changed[node_id] = node["is_changed"]
         return self.is_changed[node_id]

-class CacheSet:
-    def __init__(self, lru_size=None):
-        if lru_size is None or lru_size == 0:
-            self.init_classic_cache()
-        else:
-            self.init_lru_cache(lru_size)
-        self.all = [self.outputs, self.ui, self.objects]
-
-    # Useful for those with ample RAM/VRAM -- allows experimenting without
-    # blowing away the cache every time
-    def init_lru_cache(self, cache_size):
-        self.outputs = LRUCache(CacheKeySetInputSignature, max_size=cache_size)
-        self.ui = LRUCache(CacheKeySetInputSignature, max_size=cache_size)
-        self.objects = HierarchicalCache(CacheKeySetID)
+class CacheType(Enum):
+    CLASSIC = 0
+    LRU = 1
+    DEPENDENCY_AWARE = 2
+
+
+class CacheSet:
+    def __init__(self, cache_type=None, cache_size=None):
+        if cache_type == CacheType.DEPENDENCY_AWARE:
+            self.init_dependency_aware_cache()
+            logging.info("Disabling intermediate node cache.")
+        elif cache_type == CacheType.LRU:
+            if cache_size is None:
+                cache_size = 0
+            self.init_lru_cache(cache_size)
+            logging.info("Using LRU cache")
+        else:
+            self.init_classic_cache()
+
+        self.all = [self.outputs, self.ui, self.objects]

     # Performs like the old cache -- dump data ASAP
     def init_classic_cache(self):
@@ -80,6 +87,17 @@ class CacheSet:
         self.ui = HierarchicalCache(CacheKeySetInputSignature)
         self.objects = HierarchicalCache(CacheKeySetID)

+    def init_lru_cache(self, cache_size):
+        self.outputs = LRUCache(CacheKeySetInputSignature, max_size=cache_size)
+        self.ui = LRUCache(CacheKeySetInputSignature, max_size=cache_size)
+        self.objects = HierarchicalCache(CacheKeySetID)
+
+    # only hold cached items while the decendents have not executed
+    def init_dependency_aware_cache(self):
+        self.outputs = DependencyAwareCache(CacheKeySetInputSignature)
+        self.ui = DependencyAwareCache(CacheKeySetInputSignature)
+        self.objects = DependencyAwareCache(CacheKeySetID)
+
     def recursive_debug_dump(self):
         result = {
             "outputs": self.outputs.recursive_debug_dump(),
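Putting the pieces together, callers now pick a cache behaviour explicitly. A short sketch of how the new constructor is meant to be called, mirroring the signatures above (the size value is chosen for illustration):

```python
# Classic (aggressive) caching, the default:
caches = CacheSet(cache_type=CacheType.CLASSIC)

# Keep up to 16 node results in an LRU cache:
caches = CacheSet(cache_type=CacheType.LRU, cache_size=16)

# Lowest memory use: drop results as soon as all dependants have run:
caches = CacheSet(cache_type=CacheType.DEPENDENCY_AWARE)
```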
@@ -414,13 +432,14 @@ def execute(server, dynprompt, caches, current_item, extra_data, executed, promp
     return (ExecutionResult.SUCCESS, None, None)

 class PromptExecutor:
-    def __init__(self, server, lru_size=None):
-        self.lru_size = lru_size
+    def __init__(self, server, cache_type=False, cache_size=None):
+        self.cache_size = cache_size
+        self.cache_type = cache_type
         self.server = server
         self.reset()

     def reset(self):
-        self.caches = CacheSet(self.lru_size)
+        self.caches = CacheSet(cache_type=self.cache_type, cache_size=self.cache_size)
         self.status_messages = []
         self.success = True
@@ -775,7 +794,7 @@ def validate_prompt(prompt):
                 "details": f"Node ID '#{x}'",
                 "extra_info": {}
             }
-            return (False, error, [], [])
+            return (False, error, [], {})

         class_type = prompt[x]['class_type']
         class_ = nodes.NODE_CLASS_MAPPINGS.get(class_type, None)
@@ -786,7 +805,7 @@ def validate_prompt(prompt):
                 "details": f"Node ID '#{x}'",
                 "extra_info": {}
             }
-            return (False, error, [], [])
+            return (False, error, [], {})

         if hasattr(class_, 'OUTPUT_NODE') and class_.OUTPUT_NODE is True:
             outputs.add(x)
@@ -798,7 +817,7 @@ def validate_prompt(prompt):
             "details": "",
             "extra_info": {}
         }
-        return (False, error, [], [])
+        return (False, error, [], {})

     good_outputs = set()
     errors = []
main.py (8 changed lines)
@@ -156,7 +156,13 @@ def cuda_malloc_warning():

 def prompt_worker(q, server_instance):
     current_time: float = 0.0
-    e = execution.PromptExecutor(server_instance, lru_size=args.cache_lru)
+    cache_type = execution.CacheType.CLASSIC
+    if args.cache_lru > 0:
+        cache_type = execution.CacheType.LRU
+    elif args.cache_none:
+        cache_type = execution.CacheType.DEPENDENCY_AWARE
+
+    e = execution.PromptExecutor(server_instance, cache_type=cache_type, cache_size=args.cache_lru)
     last_gc_collect = 0
     need_gc = False
     gc_collect_interval = 10.0
nodes.py (17 changed lines)
@@ -786,6 +786,8 @@ class ControlNetLoader:
     def load_controlnet(self, control_net_name):
         controlnet_path = folder_paths.get_full_path_or_raise("controlnet", control_net_name)
         controlnet = comfy.controlnet.load_controlnet(controlnet_path)
+        if controlnet is None:
+            raise RuntimeError("ERROR: controlnet file is invalid and does not contain a valid controlnet model.")
         return (controlnet,)

 class DiffControlNetLoader:
@@ -1690,6 +1692,9 @@ class LoadImage:
             if 'A' in i.getbands():
                 mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
                 mask = 1. - torch.from_numpy(mask)
+            elif i.mode == 'P' and 'transparency' in i.info:
+                mask = np.array(i.convert('RGBA').getchannel('A')).astype(np.float32) / 255.0
+                mask = 1. - torch.from_numpy(mask)
             else:
                 mask = torch.zeros((64,64), dtype=torch.float32, device="cpu")
             output_images.append(image)
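The new branch covers palette ('P' mode) images whose transparency lives in the image metadata rather than in an alpha band; converting to RGBA materialises that as a real channel. A small standalone sketch of the same idea (the file name is hypothetical):

```python
import numpy as np
from PIL import Image

i = Image.open("palette_image_with_transparency.png")  # hypothetical input

if 'A' in i.getbands():
    alpha = np.array(i.getchannel('A')).astype(np.float32) / 255.0
elif i.mode == 'P' and 'transparency' in i.info:
    # Palette images keep transparency in metadata; convert to get an alpha band.
    alpha = np.array(i.convert('RGBA').getchannel('A')).astype(np.float32) / 255.0
else:
    alpha = np.ones(i.size[::-1], dtype=np.float32)  # fully opaque fallback

mask = 1.0 - alpha  # ComfyUI masks are inverted alpha
```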
@@ -2125,21 +2130,25 @@ def get_module_name(module_path: str) -> str:


 def load_custom_node(module_path: str, ignore=set(), module_parent="custom_nodes") -> bool:
-    module_name = os.path.basename(module_path)
+    module_name = get_module_name(module_path)
     if os.path.isfile(module_path):
         sp = os.path.splitext(module_path)
         module_name = sp[0]
+        sys_module_name = module_name
+    elif os.path.isdir(module_path):
+        sys_module_name = module_path.replace(".", "_x_")
+
     try:
         logging.debug("Trying to load custom node {}".format(module_path))
         if os.path.isfile(module_path):
-            module_spec = importlib.util.spec_from_file_location(module_name, module_path)
+            module_spec = importlib.util.spec_from_file_location(sys_module_name, module_path)
             module_dir = os.path.split(module_path)[0]
         else:
-            module_spec = importlib.util.spec_from_file_location(module_name, os.path.join(module_path, "__init__.py"))
+            module_spec = importlib.util.spec_from_file_location(sys_module_name, os.path.join(module_path, "__init__.py"))
             module_dir = module_path

         module = importlib.util.module_from_spec(module_spec)
-        sys.modules[module_name] = module
+        sys.modules[sys_module_name] = module
         module_spec.loader.exec_module(module)

         LOADED_MODULE_DIRS[module_name] = os.path.abspath(module_dir)
@@ -1,4 +1,4 @@
-comfyui-frontend-package==1.14.6
+comfyui-frontend-package==1.15.13
 torch
 torchsde
 torchvision
server.py (10 changed lines)
@@ -48,7 +48,7 @@ async def send_socket_catch_exception(function, message):
 @web.middleware
 async def cache_control(request: web.Request, handler):
     response: web.Response = await handler(request)
-    if request.path.endswith('.js') or request.path.endswith('.css'):
+    if request.path.endswith('.js') or request.path.endswith('.css') or request.path.endswith('index.json'):
         response.headers.setdefault('Cache-Control', 'no-cache')
     return response
@@ -657,7 +657,13 @@ class PromptServer():
             logging.warning("invalid prompt: {}".format(valid[1]))
             return web.json_response({"error": valid[1], "node_errors": valid[3]}, status=400)
         else:
-            return web.json_response({"error": "no prompt", "node_errors": []}, status=400)
+            error = {
+                "type": "no_prompt",
+                "message": "No prompt provided",
+                "details": "No prompt provided",
+                "extra_info": {}
+            }
+            return web.json_response({"error": error, "node_errors": {}}, status=400)

         @routes.post("/queue")
         async def post_queue(request):