add Dockerfile, compose.yml and modify README.md and .gitignore

Sasser7 2024-10-21 23:40:30 +02:00
parent 83ca891118
commit e6e0747285
4 changed files with 62 additions and 3 deletions

.gitignore

@@ -21,3 +21,4 @@ venv/
*.log
web_custom_versions/
.DS_Store
.env

Dockerfile

@@ -0,0 +1,11 @@
FROM python:3.12-slim
EXPOSE 8188
# Copy the repo and install required dependencies
WORKDIR /ComfyUI
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
# ComfyUI entrypoint
WORKDIR /ComfyUI
CMD [ "python", "main.py" ]

README.md

@@ -42,14 +42,14 @@ This ui will let you design and execute advanced stable diffusion pipelines usin
- [Flux](https://comfyanonymous.github.io/ComfyUI_examples/flux/)
- Asynchronous Queue system
- Many optimizations: Only re-executes the parts of the workflow that change between executions.
- Smart memory management: can automatically run models on GPUs with as low as 1GB of VRAM.
- Works even if you don't have a GPU with: ```--cpu``` (slow)
- Can load ckpt, safetensors and diffusers models/checkpoints. Standalone VAEs and CLIP models.
- Embeddings/Textual inversion
- [Loras (regular, locon and loha)](https://comfyanonymous.github.io/ComfyUI_examples/lora/)
- [Hypernetworks](https://comfyanonymous.github.io/ComfyUI_examples/hypernetworks/)
- Loading full workflows (with seeds) from generated PNG, WebP and FLAC files.
- Saving/Loading workflows as JSON files.
- Nodes interface can be used to create complex workflows like one for [Hires fix](https://comfyanonymous.github.io/ComfyUI_examples/2_pass_txt2img/) or much more advanced ones.
- [Area Composition](https://comfyanonymous.github.io/ComfyUI_examples/area_composition/)
- [Inpainting](https://comfyanonymous.github.io/ComfyUI_examples/inpaint/) with both regular and inpainting models.
@@ -66,6 +66,7 @@ This ui will let you design and execute advanced stable diffusion pipelines usin
- Starts up very fast.
- Works fully offline: will never download anything.
- [Config file](extra_model_paths.yaml.example) to set the search paths for models.
- [Docker](#running-with-docker) support

Workflow examples can be found on the [Examples page](https://comfyanonymous.github.io/ComfyUI_examples/)
@@ -211,6 +212,20 @@ For 6700, 6600 and maybe other RDNA2 or older: ```HSA_OVERRIDE_GFX_VERSION=10.3.
For AMD 7600 and maybe other RDNA3 cards: ```HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py```
# Running with Docker

Build and start ComfyUI with the command below, picking one of the profiles listed underneath:

```docker compose --profile <your-profile> up --build -d```

## Profiles

```cpu``` - For CPU-only machines

```cuda``` - For NVIDIA GPUs (CUDA)

## Additional variables for Docker (.env support)

```PORT``` - Host port ComfyUI is published on (default: 8188)

```LISTEN_IP``` - IP address ComfyUI listens on inside the container (default: 0.0.0.0)
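
For example, a minimal ```.env``` file next to ```compose.yml``` could look like the sketch below (the values are illustrative; both variables fall back to the defaults above when unset):

```bash
# Write an illustrative .env next to compose.yml
cat > .env <<'EOF'
PORT=8080
LISTEN_IP=0.0.0.0
EOF

# Build and start the CUDA variant in the background,
# then open http://localhost:8080 in a browser
docker compose --profile cuda up --build -d
```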
# Notes

Only parts of the graph that have an output with all the correct inputs will be executed.
@@ -296,4 +311,3 @@ This will use a snapshot of the legacy frontend preserved in the [ComfyUI Legacy
### Which GPU should I buy for this?

[See this page for some recommendations](https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI)

compose.yml

@@ -0,0 +1,33 @@
# ComfyUI common x-variable
x-comfyui-common: &comfyui-common
  build:
    context: .
  image: comfyui
  ports:
    - "${PORT:-8188}:8188"
  volumes:
    - "./custom_nodes:/ComfyUI/custom_nodes"
    - "./models:/ComfyUI/models"
    - "./output:/ComfyUI/output"

# ComfyUI profiles
services:
  comfyui-cpu:
    <<: *comfyui-common
    command: >
      python main.py --listen=${LISTEN_IP:-0.0.0.0} --cpu
    profiles:
      - cpu

  comfyui-cuda:
    <<: *comfyui-common
    command: >
      python main.py --listen=${LISTEN_IP:-0.0.0.0}
    profiles:
      - cuda
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [ gpu ]
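
Once the ```cuda``` profile is up, a quick way to check that the container actually sees the GPU (assuming the NVIDIA Container Toolkit is installed on the host) is:

```bash
# Follow the server logs of the running CUDA service
docker compose --profile cuda logs -f comfyui-cuda

# Confirm the GPU is visible from inside the container
docker compose --profile cuda exec comfyui-cuda nvidia-smi
```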