From 9ce300ab59049dd3baf6872cda2c816a6ae967b2 Mon Sep 17 00:00:00 2001
From: comfyanonymous
Date: Thu, 9 Mar 2023 11:40:24 -0500
Subject: [PATCH] I confirmed CPU only inference works fine on a pytorch
 without cuda.

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 2b1bccc5..2e095fb7 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,8 @@ This ui will let you design and execute advanced stable diffusion pipelines usin
 - Fully supports SD1.x and SD2.x
 - Asynchronous Queue system
 - Many optimizations: Only re-executes the parts of the workflow that changes between executions.
-- Command line option: ```--lowvram``` to make it work on GPUs with less than 3GB vram.
+- Command line option: ```--lowvram``` to make it work on GPUs with less than 3GB vram (enabled automatically on GPUs with low vram)
+- Works even if you don't have a GPU with: ```--cpu``` (slow)
 - Can load both ckpt and safetensors models/checkpoints. Standalone VAEs and CLIP models.
 - Embeddings/Textual inversion
 - Loras