
A Simple Tutorial: Deploying stable-diffusion-webui and NovelAI Locally on 2 GB VRAM or CPU, Including Installing Python, Git, and CUDA


Reference 1: https://blog.****.net/weixin_62651190/article/details/127666631

Reference 2: https://blog.****.net/yefufeng/article/details/127719952

Prerequisites

Python 3.10.7 or later

Git

NovelAI model

CUDA (download a version no newer than the one your current GPU driver supports)

 

1. Install Python

Official site: https://www.python.org

As shown below, find the version you need and download the Python installer:

Download the installer

Scroll down the column in the red box to find the Python version you need (stable releases; the column on the right lists pre-releases).

As shown, click to download the exe installer:

Double-click the installer, check Add Python 3.10 to PATH, then click Install (red box at the top). (A custom install onto the D drive may cause problems when running the batch script later.)

If you do pick an install path on the D drive, click back until you return to the first install page. (Installing straight through with that custom path may cause problems later when running webui-user.bat.)

Verify the installation
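A quick check: open a new command prompt (a new window so the updated PATH is picked up) and run the two commands below; both should print the versions you just installed.

python --version
pip --version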

Under C:\Users\XXXX\AppData\Roaming, create a pip folder, and inside it create a pip.ini file with the following content, then save it (this points pip at the Tsinghua mirror):

[global]
timeout = 60000
index-url = https://pypi.tuna.tsinghua.edu.cn/simple

[install]
use-mirrors = true
mirrors = https://pypi.tuna.tsinghua.edu.cn
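To double-check that pip picked up the config file, you can run the command below; it should list global.index-url pointing at the Tsinghua URL.

pip config list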

2. Install CUDA

Download the matching installer

On the desktop, right-click >> NVIDIA Control Panel, which opens as shown:

In the panel shown above, click System Information, then open the Components tab; note the CUDA version listed there, and do not download a CUDA toolkit newer than that version.
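If you prefer the command line, running the command below in a command prompt also reports, in the top-right corner of its output, the highest CUDA version the installed driver supports.

nvidia-smi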

CUDA download page:

https://developer.nvidia.com/cuda-toolkit-archive
Run the installer; clicking OK and Next through the defaults is generally all that is needed.
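To confirm the toolkit installed correctly, open a new command prompt and run the command below (the installer normally adds nvcc to PATH); it should print the CUDA compiler version you just installed.

nvcc --version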

3. Install Git

Download Git from its official site (https://git-scm.com) and simply keep clicking Next through the installer; the defaults are fine.

Verify the installation
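A quick check from any command prompt (or Git Bash): the command below should print the installed Git version.

git --version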

4. Install dev-sidecar

The share below contains the NovelAI files other than the ckpt and pt files; the ckpt file is too large to upload without a premium account. (Other NovelAI model shares found by searching on Baidu should also work, or you can download directly with the magnet link further down.)

Link: https://pan.baidu.com/s/1UxYM2votxxFI0GnPhh9_zw
Extraction code: hte5

The exe installs through a simple wizard; after installation, open the program and install its certificate when prompted.

5. Clone stable-diffusion-webui

Open dev-sidecar and configure it as shown in the screenshot below.

Create a new folder on the D drive to hold the cloned project.

Right-click in the empty folder, choose Git Bash Here, and run the following command to clone the project (after copying it, right-click >> Paste, then press Enter):

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui

6. Download GFPGANv1.4.pth

If the download is slow, you can also get it from the Baidu Netdisk share linked above.

https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
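If the browser download keeps stalling, curl (bundled with Windows 10 and later) is one alternative; a minimal sketch, run from the folder you want the file saved in (-L follows GitHub's redirect, -o names the output file):

curl -L -o GFPGANv1.4.pth https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth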

7. Download the NovelAI model and copy the required files into place

NovelAI model (novelaileak, about 50 GB), magnet link:

magnet:?xt=urn:btih:5bde442da86265b670a3e5ea3163afad2c6f8ecc&dn=novelaileak

Copy or move the downloaded NovelAI files into the project folder stable-diffusion-webui as follows (a batch sketch of the same steps appears after this list):

1. Copy GFPGANv1.4.pth into the stable-diffusion-webui root directory.

2. Copy novelaileak\stableckpt\animefull-latest\model.ckpt into stable-diffusion-webui\models\Stable-diffusion and rename it novel-ai.ckpt.

3. Copy novelaileak\stableckpt\animefull-latest\config.yaml into stable-diffusion-webui\models\Stable-diffusion and rename it novel-ai.yaml.

4. Copy novelaileak\stableckpt\animevae.pt into stable-diffusion-webui\models\Stable-diffusion and rename it novel-ai.vae.pt.

5. Copy all files under novelaileak\stableckpt\modules\modules into stable-diffusion-webui\models\hypernetworks; if the hypernetworks folder does not exist, create it.
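The copy-and-rename steps above can also be done in one go with a small batch script. This is only a sketch: SRC and DST are assumed example paths, so change them to wherever you actually put the novelaileak download and the stable-diffusion-webui clone, and adjust the GFPGANv1.4.pth location if it is not in the current folder.

@echo off
rem Assumed example paths - adjust to your own setup
set SRC=D:\novelaileak
set DST=D:\stable-diffusion-webui

rem 1. GFPGANv1.4.pth (assumed to be in the current folder) goes to the webui root
copy "GFPGANv1.4.pth" "%DST%\"

rem 2-4. model, config and VAE are copied and renamed in one step
copy "%SRC%\stableckpt\animefull-latest\model.ckpt" "%DST%\models\Stable-diffusion\novel-ai.ckpt"
copy "%SRC%\stableckpt\animefull-latest\config.yaml" "%DST%\models\Stable-diffusion\novel-ai.yaml"
copy "%SRC%\stableckpt\animevae.pt" "%DST%\models\Stable-diffusion\novel-ai.vae.pt"

rem 5. hypernetwork modules; create the target folder first if it is missing
if not exist "%DST%\models\hypernetworks" mkdir "%DST%\models\hypernetworks"
xcopy /y "%SRC%\stableckpt\modules\modules\*" "%DST%\models\hypernetworks\"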

8. Edit webui-user.bat, save it, and run

In the stable-diffusion-webui root directory, find webui-user.bat, right-click it and choose Edit:

@echo off

set PYTHON=D:\Programs\Python\Python310\python.exe
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--ckpt .\models\Stable-diffusion\novel-ai.ckpt --lowvram --always-batch-cond-uncond --precision full --no-half --opt-split-attention-v1 --use-cpu sd --autolaunch

call webui.bat

The value after set PYTHON= is your Python install path; you can find it by typing where python at a command prompt and pressing Enter.

After set COMMANDLINE_ARGS= on that line:

With 2 GB of VRAM, add --lowvram, then a space, then append the following:

--always-batch-cond-uncond --precision full --no-half --opt-split-attention-v1 --use-cpu sd --autolaunch

With 4 GB of VRAM, add --medvram instead.
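For example, one possible 4 GB variant of that line (a sketch only; keep or drop the extra flags above depending on what your card can handle) might look like this:

set COMMANDLINE_ARGS=--ckpt .\models\Stable-diffusion\novel-ai.ckpt --medvram --autolaunch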

Save the file, then double-click webui-user.bat to run it. When the console prints "Running on local URL", the launch has succeeded and a browser window opens automatically (or open 127.0.0.1:7860 yourself). The result looks like this:

PS: # dev-sidecar does not need to be running for subsequent launches.

# Running on the CPU, each image takes 5-10 minutes to generate; the CPU stays at full load the whole time and the machine may feel sluggish.

# Avoid generating too many images back to back, so that prolonged overheating does not damage the CPU.

# If you run into bugs, see: https://blog.****.net/yefufeng/article/details/127719952

# Memo: output of python launch.py -h, kept for reference

(novelai) E:\workspace\02_Python\novalai\stable-diffusion-webui>python launch.py -h
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: b8f2dfed3c0085f1df359b9dc5b3841ddc2196f0
Installing requirements for Web UI
Launching Web UI with arguments: -h
usage: launch.py [-h] [--config CONFIG] [--ckpt CKPT] [--ckpt-dir CKPT_DIR] [--gfpgan-dir GFPGAN_DIR]
                 [--gfpgan-model GFPGAN_MODEL] [--no-half] [--no-half-vae] [--no-progressbar-hiding]
                 [--max-batch-count MAX_BATCH_COUNT] [--embeddings-dir EMBEDDINGS_DIR]
                 [--hypernetwork-dir HYPERNETWORK_DIR] [--localizations-dir LOCALIZATIONS_DIR] [--allow-code]
                 [--medvram] [--lowvram] [--lowram] [--always-batch-cond-uncond] [--unload-gfpgan]
                 [--precision {full,autocast}] [--share] [--ngrok NGROK] [--ngrok-region NGROK_REGION]
                 [--enable-insecure-extension-access] [--codeformer-models-path CODEFORMER_MODELS_PATH]
                 [--gfpgan-models-path GFPGAN_MODELS_PATH] [--esrgan-models-path ESRGAN_MODELS_PATH]
                 [--bsrgan-models-path BSRGAN_MODELS_PATH] [--realesrgan-models-path REALESRGAN_MODELS_PATH]
                 [--scunet-models-path SCUNET_MODELS_PATH] [--swinir-models-path SWINIR_MODELS_PATH]
                 [--ldsr-models-path LDSR_MODELS_PATH] [--clip-models-path CLIP_MODELS_PATH] [--xformers]
                 [--force-enable-xformers] [--deepdanbooru] [--opt-split-attention] [--opt-split-attention-invokeai]
                 [--opt-split-attention-v1] [--disable-opt-split-attention]
                 [--use-cpu {all,sd,interrogate,gfpgan,swinir,esrgan,scunet,codeformer} [{all,sd,interrogate,gfpgan,swinir,esrgan,scunet,codeformer} ...]]
                 [--listen] [--port PORT] [--show-negative-prompt] [--ui-config-file UI_CONFIG_FILE]
                 [--hide-ui-dir-config] [--freeze-settings] [--ui-settings-file UI_SETTINGS_FILE] [--gradio-debug]
                 [--gradio-auth GRADIO_AUTH] [--gradio-img2img-tool {color-sketch,editor}] [--opt-channelslast]
                 [--styles-file STYLES_FILE] [--autolaunch] [--theme THEME] [--use-textbox-seed]
                 [--disable-console-progressbars] [--enable-console-prompts] [--vae-path VAE_PATH]
                 [--disable-safe-unpickle] [--api] [--nowebui] [--ui-debug-mode] [--device-id DEVICE_ID]
                 [--administrator] [--cors-allow-origins CORS_ALLOW_ORIGINS] [--tls-keyfile TLS_KEYFILE]
                 [--tls-certfile TLS_CERTFILE] [--server-name SERVER_NAME]

options:
  -h, --help            show this help message and exit
  --config CONFIG       path to config which constructs model
  --ckpt CKPT           path to checkpoint of stable diffusion model; if specified, this checkpoint will be added to
                        the list of checkpoints and loaded
  --ckpt-dir CKPT_DIR   Path to directory with stable diffusion checkpoints
  --gfpgan-dir GFPGAN_DIR
                        GFPGAN directory
  --gfpgan-model GFPGAN_MODEL
                        GFPGAN model file name
  --no-half             do not switch the model to 16-bit floats
  --no-half-vae         do not switch the VAE model to 16-bit floats
  --no-progressbar-hiding
                        do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware
                        acceleration in browser)
  --max-batch-count MAX_BATCH_COUNT
                        maximum batch count value for the UI
  --embeddings-dir EMBEDDINGS_DIR
                        embeddings directory for textual inversion (default: embeddings)
  --hypernetwork-dir HYPERNETWORK_DIR
                        hypernetwork directory
  --localizations-dir LOCALIZATIONS_DIR
                        localizations directory
  --allow-code          allow custom script execution from webui
  --medvram             enable stable diffusion model optimizations for sacrificing a little speed for low VRM usage
  --lowvram             enable stable diffusion model optimizations for sacrificing a lot of speed for very low VRM
                        usage
  --lowram              load stable diffusion checkpoint weights to VRAM instead of RAM
  --always-batch-cond-uncond
                        disables cond/uncond batching that is enabled to save memory with --medvram or --lowvram
  --unload-gfpgan       does not do anything.
  --precision {full,autocast}
                        evaluate at this precision
  --share               use share=True for gradio and make the UI accessible through their site
  --ngrok NGROK         ngrok authtoken, alternative to gradio --share
  --ngrok-region NGROK_REGION
                        The region in which ngrok should start.
  --enable-insecure-extension-access
                        enable extensions tab regardless of other options
  --codeformer-models-path CODEFORMER_MODELS_PATH
                        Path to directory with codeformer model file(s).
  --gfpgan-models-path GFPGAN_MODELS_PATH
                        Path to directory with GFPGAN model file(s).
  --esrgan-models-path ESRGAN_MODELS_PATH
                        Path to directory with ESRGAN model file(s).
  --bsrgan-models-path BSRGAN_MODELS_PATH
                        Path to directory with BSRGAN model file(s).
  --realesrgan-models-path REALESRGAN_MODELS_PATH
                        Path to directory with RealESRGAN model file(s).
  --scunet-models-path SCUNET_MODELS_PATH
                        Path to directory with ScuNET model file(s).
  --swinir-models-path SWINIR_MODELS_PATH
                        Path to directory with SwinIR model file(s).
  --ldsr-models-path LDSR_MODELS_PATH
                        Path to directory with LDSR model file(s).
  --clip-models-path CLIP_MODELS_PATH
                        Path to directory with CLIP model file(s).
  --xformers            enable xformers for cross attention layers
  --force-enable-xformers
                        enable xformers for cross attention layers regardless of whether the checking code thinks you
                        can run it; do not make bug reports if this fails to work
  --deepdanbooru        enable deepdanbooru interrogator
  --opt-split-attention
                        force-enables Doggettx's cross-attention layer optimization. By default, it's on for torch
                        cuda.
  --opt-split-attention-invokeai
                        force-enables InvokeAI's cross-attention layer optimization. By default, it's on when cuda is
                        unavailable.
  --opt-split-attention-v1
                        enable older version of split attention optimization that does not consume all the VRAM it can
                        find
  --disable-opt-split-attention
                        force-disables cross-attention layer optimization
  --use-cpu {all,sd,interrogate,gfpgan,swinir,esrgan,scunet,codeformer} [{all,sd,interrogate,gfpgan,swinir,esrgan,scunet,codeformer} ...]
                        use CPU as torch device for specified modules
  --listen              launch gradio with 0.0.0.0 as server name, allowing to respond to network requests
  --port PORT           launch gradio with given server port, you need root/admin rights for ports < 1024, defaults to
                        7860 if available
  --show-negative-prompt
                        does not do anything
  --ui-config-file UI_CONFIG_FILE
                        filename to use for ui configuration
  --hide-ui-dir-config  hide directory configuration from webui
  --freeze-settings     disable editing settings
  --ui-settings-file UI_SETTINGS_FILE
                        filename to use for ui settings
  --gradio-debug        launch gradio with --debug option
  --gradio-auth GRADIO_AUTH
                        set gradio authentication like "username:password"; or comma-delimit multiple like
                        "u1:p1,u2:p2,u3:p3"
  --gradio-img2img-tool {color-sketch,editor}
                        gradio image uploader tool: can be either editor for ctopping, or color-sketch for drawing
  --opt-channelslast    change memory type for stable diffusion to channels last
  --styles-file STYLES_FILE
                        filename to use for styles
  --autolaunch          open the webui URL in the system's default browser upon launch
  --theme THEME         launches the UI with light or dark theme
  --use-textbox-seed    use textbox for seeds in UI (no up/down, but possible to input long seeds)
  --disable-console-progressbars
                        do not output progressbars to console
  --enable-console-prompts
                        print prompts to console when generating with txt2img and img2img
  --vae-path VAE_PATH   Path to Variational Autoencoders model
  --disable-safe-unpickle
                        disable checking pytorch models for malicious code
  --api                 use api=True to launch the api with the webui
  --nowebui             use api=True to launch the api instead of the webui
  --ui-debug-mode       Don't load model to quickly launch UI
  --device-id DEVICE_ID
                        Select the default CUDA device to use (export CUDA_VISIBLE_DEVICES=0,1,etc might be needed
                        before)
  --administrator       Administrator rights
  --cors-allow-origins CORS_ALLOW_ORIGINS
                        Allowed CORS origins
  --tls-keyfile TLS_KEYFILE
                        Partially enables TLS, requires --tls-certfile to fully function
  --tls-certfile TLS_CERTFILE
                        Partially enables TLS, requires --tls-keyfile to fully function
  --server-name SERVER_NAME
                        Sets hostname of server
 

Original article: https://www.cnblogs.com/zhongbenbayun/p/16879362.html