Stable Diffusion lowvram problem when switching to a new model

Problem

After selecting a newly downloaded *.safetensors model in the WebUI, the following error appears:

Applying attention optimization: sub-quadratic... done.
Model loaded in 16.2s (load weights from disk: 0.5s, create model: 0.9s, apply weights to model: 13.4s, apply dtype to VAE: 0.2s, move model to device: 0.1s, load textual inversion embeddings: 0.2s, calculate empty prompt: 0.8s).
Reusing loaded model dreamshaper_8.safetensors [879db523c3] to load AnythingXL_xl.safetensors [8421598e93]
Loading weights [8421598e93] from /Users/couldhll/Desktop/stable-diffusion-webui/models/Stable-diffusion/AnythingXL_xl.safetensors
Creating model from config: /Users/couldhll/Desktop/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
changing setting sd_model_checkpoint to AnythingXL_xl.safetensors [8421598e93]: RuntimeError
Traceback (most recent call last):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 879, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 751, in load_model
    send_model_to_device(sd_model)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 681, in send_model_to_device
    m.to(shared.device)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in to
    return self._apply(convert)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 833, in _apply
    param_applied = fn(param)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: MPS backend out of memory (MPS allocated: 6.51 GB, other allocations: 243.82 MB, max allowed: 6.80 GB). Tried to allocate 120.62 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).

*** Error completing request
*** Arguments: ('task(g4jhu75lk3xb84y)', <gradio.routes.Request object at 0x169974070>, 'a muscular female,muscles in the legs, bra,T-back,brown skin, short hair', '', [], 1, 1, 9, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/processing.py", line 832, in process_images
        sd_models.reload_model_weights()
      File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
        sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
      File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
        send_model_to_cpu(sd_model)
      File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
        if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

Solution

Launch the WebUI with the lowvram flag: ./webui.sh --lowvram
Or:
In webui-user.sh, find the line #export COMMANDLINE_ARGS="" and change it to: export COMMANDLINE_ARGS="--lowvram --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"

PS: Since this machine runs macOS, the macOS-specific flags are also required: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate are copied from webui-macos-env.sh (export COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate").
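
For reference, this is roughly what the relevant part of webui-user.sh looks like after the edit (a sketch for this macOS setup; the extra flags are the macOS defaults copied from webui-macos-env.sh, as noted above):

# webui-user.sh (excerpt)
# --lowvram loads model parts onto the GPU only when needed, so the SDXL checkpoint fits under the ~6.8 GB MPS limit seen in the error
export COMMANDLINE_ARGS="--lowvram --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"

The error message also suggests PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to lift the MPS allocation cap, but it warns that this may cause system failure, so --lowvram is the safer first step.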

Stable Diffusion reports missing clip-vit-large-patch14 when switching to a new model

Problem

After selecting a newly downloaded *.safetensors model in the WebUI, the following error appears:

Creating model from config: /Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/configs/stable-diffusion/v2-inference-v.yaml
Applying attention optimization: sub-quadratic... done.
Model loaded in 35.1s (load weights from disk: 10.0s, find config: 11.5s, create model: 0.2s, apply weights to model: 9.0s, apply half(): 2.0s, move model to device: 0.2s, calculate empty prompt: 2.0s).
Reusing loaded model v2-1_768-ema-pruned.ckpt [ad2a33c361] to load dreamshaper_8.safetensors [879db523c3]
Loading weights [879db523c3] from /Users/couldhll/Desktop/stable-diffusion-webui/models/Stable-diffusion/dreamshaper_8.safetensors
Creating model from config: /Users/couldhll/Desktop/stable-diffusion-webui/configs/v1-inference.yaml
/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
creating model quickly: OSError
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/ui_settings.py", line 316, in <lambda>
    fn=lambda value, k=k: self.run_settings_single(value, key=k),
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/ui_settings.py", line 95, in run_settings_single
    if value is None or not opts.set(key, value):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 879, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Failed to create model quickly; will retry using slow method.
changing setting sd_model_checkpoint to dreamshaper_8.safetensors [879db523c3]: OSError
Traceback (most recent call last):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 879, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 732, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Analysis

There is no local copy of clip-vit-large-patch14 at stable-diffusion-webui/openai/clip-vit-large-patch14, and downloading the tokenizer from Hugging Face fails, so CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14') cannot load it.

Solution

mkdir openai
cd openai
git clone https://huggingface.co/openai/clip-vit-large-patch14.git
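
A slightly fuller version of the same steps, assuming they are run from the stable-diffusion-webui root (matching the paths in the error) and that git-lfs is available so the large LFS files in the repo are fetched; the ls at the end is just a sanity check:

cd /Users/couldhll/Desktop/stable-diffusion-webui   # webui root, as in the error messages
mkdir -p openai && cd openai
git lfs install                                     # optional for the tokenizer itself, but avoids a broken clone of the weight files
git clone https://huggingface.co/openai/clip-vit-large-patch14.git
ls clip-vit-large-patch14                           # should list vocab.json, merges.txt, tokenizer_config.json, ...

The WebUI runs from its root directory, so once openai/clip-vit-large-patch14 exists there, CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14') resolves it as a local path instead of trying to download it.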

PS: When running git clone I hit the "Couldn't connect to server" problem described in the next section.

Fixing Git's "Couldn't connect to server" problem

Problem

When downloading openai/clip-vit-large-patch14 with the command git clone https://huggingface.co/openai/clip-vit-large-patch14, the following error appears:

Cloning into 'clip-vit-large-patch14'...
fatal: unable to access 'https://huggingface.co/openai/clip-vit-large-patch14/': Failed to connect to huggingface.co port 443 after 75003 ms: Couldn't connect to server

Solution

huggingface.co cannot be reached directly from this network, so route the clone through the local proxy at http://127.0.0.1:8118 by running:
git clone https://huggingface.co/openai/clip-vit-large-patch14.git -c http.proxy="http://127.0.0.1:8118"
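
If the -c option on the clone line doesn't take effect, the same proxy can be applied in a few other standard ways (http://127.0.0.1:8118 is the local proxy from this setup; substitute your own):

# proxy for a single invocation, set before the subcommand
git -c http.proxy="http://127.0.0.1:8118" clone https://huggingface.co/openai/clip-vit-large-patch14.git

# or via environment variables, which git (and git-lfs) honor
https_proxy=http://127.0.0.1:8118 git clone https://huggingface.co/openai/clip-vit-large-patch14.git

# or globally, remembering to remove it afterwards
git config --global http.proxy "http://127.0.0.1:8118"
git config --global --unset http.proxy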

pip network problem when installing Stable Diffusion

Problem

When installing stable-diffusion-webui and running ./webui.sh, the following error appears:

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on couldhll user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.3.9.4)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Installing requirements
Traceback (most recent call last):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/launch.py", line 48, in <module>
    main()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/launch.py", line 39, in main
    prepare_environment()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/launch_utils.py", line 422, in prepare_environment
    run_pip(f"install -r \"{requirements_file}\"", "requirements")
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/launch_utils.py", line 143, in run_pip
    return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/launch_utils.py", line 115, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements.
Command: "/Users/couldhll/Desktop/stable-diffusion-webui/venv/bin/python3.10" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 2
stdout: Collecting setuptools==69.5.1 (from -r requirements_versions.txt (line 1))
  Using cached setuptools-69.5.1-py3-none-any.whl.metadata (6.2 kB)
Collecting GitPython==3.1.32 (from -r requirements_versions.txt (line 2))
  Using cached GitPython-3.1.32-py3-none-any.whl.metadata (10.0 kB)
Collecting Pillow==9.5.0 (from -r requirements_versions.txt (line 3))
  Using cached Pillow-9.5.0-cp310-cp310-macosx_10_10_x86_64.whl.metadata (9.5 kB)
Collecting accelerate==0.21.0 (from -r requirements_versions.txt (line 4))
  Using cached accelerate-0.21.0-py3-none-any.whl.metadata (17 kB)
Collecting blendmodes==2022 (from -r requirements_versions.txt (line 5))
  Using cached blendmodes-2022-py3-none-any.whl.metadata (12 kB)
Collecting clean-fid==0.1.35 (from -r requirements_versions.txt (line 6))
  Using cached clean_fid-0.1.35-py3-none-any.whl.metadata (36 kB)
Collecting diskcache==5.6.3 (from -r requirements_versions.txt (line 7))
  Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Collecting einops==0.4.1 (from -r requirements_versions.txt (line 8))
  Using cached einops-0.4.1-py3-none-any.whl.metadata (10 kB)
Collecting facexlib==0.3.0 (from -r requirements_versions.txt (line 9))
  Using cached facexlib-0.3.0-py3-none-any.whl.metadata (4.6 kB)
Collecting fastapi==0.94.0 (from -r requirements_versions.txt (line 10))
  Using cached fastapi-0.94.0-py3-none-any.whl.metadata (25 kB)
Collecting gradio==3.41.2 (from -r requirements_versions.txt (line 11))
  Using cached gradio-3.41.2-py3-none-any.whl.metadata (17 kB)
Collecting httpcore==0.15 (from -r requirements_versions.txt (line 12))
  Using cached httpcore-0.15.0-py3-none-any.whl.metadata (15 kB)
Collecting inflection==0.5.1 (from -r requirements_versions.txt (line 13))
  Using cached inflection-0.5.1-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting jsonmerge==1.8.0 (from -r requirements_versions.txt (line 14))
  Using cached jsonmerge-1.8.0.tar.gz (26 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting kornia==0.6.7 (from -r requirements_versions.txt (line 15))
  Using cached kornia-0.6.7-py2.py3-none-any.whl.metadata (12 kB)
Collecting lark==1.1.2 (from -r requirements_versions.txt (line 16))
  Using cached lark-1.1.2-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting numpy==1.26.2 (from -r requirements_versions.txt (line 17))
  Using cached numpy-1.26.2-cp310-cp310-macosx_10_9_x86_64.whl.metadata (61 kB)
Collecting omegaconf==2.2.3 (from -r requirements_versions.txt (line 18))
  Using cached omegaconf-2.2.3-py3-none-any.whl.metadata (3.9 kB)
Collecting open-clip-torch==2.20.0 (from -r requirements_versions.txt (line 19))
  Using cached open_clip_torch-2.20.0-py3-none-any.whl.metadata (46 kB)
Collecting piexif==1.1.3 (from -r requirements_versions.txt (line 20))
  Using cached piexif-1.1.3-py2.py3-none-any.whl.metadata (3.7 kB)
Collecting psutil==5.9.5 (from -r requirements_versions.txt (line 21))
  Using cached psutil-5.9.5-cp36-abi3-macosx_10_9_x86_64.whl.metadata (21 kB)
Collecting pytorch_lightning==1.9.4 (from -r requirements_versions.txt (line 22))
  Using cached pytorch_lightning-1.9.4-py3-none-any.whl.metadata (22 kB)
Collecting resize-right==0.0.2 (from -r requirements_versions.txt (line 23))
  Using cached resize_right-0.0.2-py3-none-any.whl.metadata (551 bytes)
Collecting safetensors==0.4.2 (from -r requirements_versions.txt (line 24))
  Using cached safetensors-0.4.2-cp310-cp310-macosx_10_12_x86_64.whl.metadata (3.8 kB)
Collecting scikit-image==0.21.0 (from -r requirements_versions.txt (line 25))
  Using cached scikit_image-0.21.0-cp310-cp310-macosx_10_9_x86_64.whl.metadata (14 kB)
Collecting spandrel==0.1.6 (from -r requirements_versions.txt (line 26))
  Using cached spandrel-0.1.6-py3-none-any.whl.metadata (12 kB)
Collecting tomesd==0.1.3 (from -r requirements_versions.txt (line 27))
  Using cached tomesd-0.1.3-py3-none-any.whl.metadata (9.1 kB)
Requirement already satisfied: torch in ./venv/lib/python3.10/site-packages (from -r requirements_versions.txt (line 28)) (2.1.0)
Collecting torchdiffeq==0.2.3 (from -r requirements_versions.txt (line 29))
  Using cached torchdiffeq-0.2.3-py3-none-any.whl.metadata (488 bytes)
Collecting torchsde==0.2.6 (from -r requirements_versions.txt (line 30))
  Using cached torchsde-0.2.6-py3-none-any.whl.metadata (5.3 kB)
Collecting transformers==4.30.2 (from -r requirements_versions.txt (line 31))
  Using cached transformers-4.30.2-py3-none-any.whl.metadata (113 kB)
Collecting httpx==0.24.1 (from -r requirements_versions.txt (line 32))
  Using cached httpx-0.24.1-py3-none-any.whl.metadata (7.4 kB)
Collecting pillow-avif-plugin==1.4.3 (from -r requirements_versions.txt (line 33))
  Using cached pillow_avif_plugin-1.4.3-cp310-cp310-macosx_10_10_x86_64.whl.metadata (1.7 kB)
Collecting gitdb<5,>=4.0.1 (from GitPython==3.1.32->-r requirements_versions.txt (line 2))
  Using cached gitdb-4.0.11-py3-none-any.whl.metadata (1.2 kB)
Requirement already satisfied: packaging>=20.0 in ./venv/lib/python3.10/site-packages (from accelerate==0.21.0->-r requirements_versions.txt (line 4)) (24.0)
Requirement already satisfied: pyyaml in ./venv/lib/python3.10/site-packages (from accelerate==0.21.0->-r requirements_versions.txt (line 4)) (6.0.1)
Collecting aenum<4,>=3.1.7 (from blendmodes==2022->-r requirements_versions.txt (line 5))
  Using cached aenum-3.1.15-py3-none-any.whl.metadata (3.7 kB)
Collecting deprecation<3,>=2.1.0 (from blendmodes==2022->-r requirements_versions.txt (line 5))
  Using cached deprecation-2.1.0-py2.py3-none-any.whl.metadata (4.6 kB)
Requirement already satisfied: torchvision in ./venv/lib/python3.10/site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (0.16.0)
Collecting scipy>=1.0.1 (from clean-fid==0.1.35->-r requirements_versions.txt (line 6))
  Using cached scipy-1.13.1-cp310-cp310-macosx_10_9_x86_64.whl.metadata (60 kB)
Requirement already satisfied: tqdm>=4.28.1 in ./venv/lib/python3.10/site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (4.66.4)
Requirement already satisfied: requests in ./venv/lib/python3.10/site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (2.32.3)
Collecting filterpy (from facexlib==0.3.0->-r requirements_versions.txt (line 9))
  Using cached filterpy-1.4.5.zip (177 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting numba (from facexlib==0.3.0->-r requirements_versions.txt (line 9))
  Using cached numba-0.59.1-cp310-cp310-macosx_10_9_x86_64.whl.metadata (2.7 kB)
Collecting opencv-python (from facexlib==0.3.0->-r requirements_versions.txt (line 9))
  Using cached opencv_python-4.10.0.82-cp37-abi3-macosx_12_0_x86_64.whl.metadata (20 kB)
Collecting pydantic!=1.7,!=1.7.1,!=1.7.2,!=1.7.3,!=1.8,!=1.8.1,<2.0.0,>=1.6.2 (from fastapi==0.94.0->-r requirements_versions.txt (line 10))
  Using cached pydantic-1.10.15-cp310-cp310-macosx_10_9_x86_64.whl.metadata (150 kB)
Collecting starlette<0.27.0,>=0.26.0 (from fastapi==0.94.0->-r requirements_versions.txt (line 10))
  Using cached starlette-0.26.1-py3-none-any.whl.metadata (5.8 kB)
Collecting aiofiles<24.0,>=22.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached aiofiles-23.2.1-py3-none-any.whl.metadata (9.7 kB)
Collecting altair<6.0,>=4.2.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached altair-5.3.0-py3-none-any.whl.metadata (9.2 kB)
Collecting ffmpy (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached ffmpy-0.3.2.tar.gz (5.5 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting gradio-client==0.5.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached gradio_client-0.5.0-py3-none-any.whl.metadata (7.1 kB)
Requirement already satisfied: huggingface-hub>=0.14.0 in ./venv/lib/python3.10/site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 11)) (0.23.3)
Collecting importlib-resources<7.0,>=1.3 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached importlib_resources-6.4.0-py3-none-any.whl.metadata (3.9 kB)
Requirement already satisfied: jinja2<4.0 in ./venv/lib/python3.10/site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 11)) (3.1.4)
Requirement already satisfied: markupsafe~=2.0 in ./venv/lib/python3.10/site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 11)) (2.1.5)
Collecting matplotlib~=3.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached matplotlib-3.9.0-cp310-cp310-macosx_10_12_x86_64.whl.metadata (11 kB)
Collecting orjson~=3.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached orjson-3.10.3-cp310-cp310-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl.metadata (49 kB)
Collecting pandas<3.0,>=1.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached pandas-2.2.2-cp310-cp310-macosx_10_9_x86_64.whl.metadata (19 kB)
Collecting pydub (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached pydub-0.25.1-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting python-multipart (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached python_multipart-0.0.9-py3-none-any.whl.metadata (2.5 kB)
Collecting semantic-version~=2.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached semantic_version-2.10.0-py2.py3-none-any.whl.metadata (9.7 kB)
Requirement already satisfied: typing-extensions~=4.0 in ./venv/lib/python3.10/site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 11)) (4.12.2)
Collecting uvicorn>=0.14.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached uvicorn-0.30.1-py3-none-any.whl.metadata (6.3 kB)
Collecting websockets<12.0,>=10.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached websockets-11.0.3-cp310-cp310-macosx_10_9_x86_64.whl.metadata (6.6 kB)
Collecting h11<0.13,>=0.11 (from httpcore==0.15->-r requirements_versions.txt (line 12))
  Using cached h11-0.12.0-py3-none-any.whl.metadata (8.1 kB)
Collecting sniffio==1.* (from httpcore==0.15->-r requirements_versions.txt (line 12))
  Using cached sniffio-1.3.1-py3-none-any.whl.metadata (3.9 kB)
Collecting anyio==3.* (from httpcore==0.15->-r requirements_versions.txt (line 12))
  Using cached anyio-3.7.1-py3-none-any.whl.metadata (4.7 kB)
Requirement already satisfied: certifi in ./venv/lib/python3.10/site-packages (from httpcore==0.15->-r requirements_versions.txt (line 12)) (2024.6.2)
Collecting jsonschema (from jsonmerge==1.8.0->-r requirements_versions.txt (line 14))
  Using cached jsonschema-4.22.0-py3-none-any.whl.metadata (8.2 kB)
Collecting antlr4-python3-runtime==4.9.* (from omegaconf==2.2.3->-r requirements_versions.txt (line 18))
  Using cached antlr4-python3-runtime-4.9.3.tar.gz (117 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: regex in ./venv/lib/python3.10/site-packages (from open-clip-torch==2.20.0->-r requirements_versions.txt (line 19)) (2024.5.15)
Requirement already satisfied: ftfy in ./venv/lib/python3.10/site-packages (from open-clip-torch==2.20.0->-r requirements_versions.txt (line 19)) (6.2.0)
Requirement already satisfied: sentencepiece in ./venv/lib/python3.10/site-packages (from open-clip-torch==2.20.0->-r requirements_versions.txt (line 19)) (0.2.0)
Requirement already satisfied: protobuf<4 in ./venv/lib/python3.10/site-packages (from open-clip-torch==2.20.0->-r requirements_versions.txt (line 19)) (3.20.0)
Collecting timm (from open-clip-torch==2.20.0->-r requirements_versions.txt (line 19))
  Using cached timm-1.0.3-py3-none-any.whl.metadata (43 kB)
Requirement already satisfied: fsspec>2021.06.0 in ./venv/lib/python3.10/site-packages (from fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22)) (2024.6.0)
Collecting torchmetrics>=0.7.0 (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22))
  Using cached torchmetrics-1.4.0.post0-py3-none-any.whl.metadata (19 kB)
Collecting lightning-utilities>=0.6.0.post0 (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22))
  Using cached lightning_utilities-0.11.2-py3-none-any.whl.metadata (4.7 kB)
Requirement already satisfied: networkx>=2.8 in ./venv/lib/python3.10/site-packages (from scikit-image==0.21.0->-r requirements_versions.txt (line 25)) (3.3)
Collecting imageio>=2.27 (from scikit-image==0.21.0->-r requirements_versions.txt (line 25))
  Using cached imageio-2.34.1-py3-none-any.whl.metadata (4.9 kB)
Collecting tifffile>=2022.8.12 (from scikit-image==0.21.0->-r requirements_versions.txt (line 25))
  Using cached tifffile-2024.5.22-py3-none-any.whl.metadata (30 kB)
Collecting PyWavelets>=1.1.1 (from scikit-image==0.21.0->-r requirements_versions.txt (line 25))
  Using cached pywavelets-1.6.0-cp310-cp310-macosx_10_9_x86_64.whl.metadata (9.0 kB)
Collecting lazy_loader>=0.2 (from scikit-image==0.21.0->-r requirements_versions.txt (line 25))
  Using cached lazy_loader-0.4-py3-none-any.whl.metadata (7.6 kB)
Collecting trampoline>=0.1.2 (from torchsde==0.2.6->-r requirements_versions.txt (line 30))
  Using cached trampoline-0.1.2-py3-none-any.whl.metadata (10 kB)
Requirement already satisfied: filelock in ./venv/lib/python3.10/site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 31)) (3.14.0)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.30.2->-r requirements_versions.txt (line 31))
  Using cached tokenizers-0.13.3-cp310-cp310-macosx_10_11_x86_64.whl.metadata (6.7 kB)
Requirement already satisfied: idna in ./venv/lib/python3.10/site-packages (from httpx==0.24.1->-r requirements_versions.txt (line 32)) (3.7)
Collecting exceptiongroup (from anyio==3.*->httpcore==0.15->-r requirements_versions.txt (line 12))
  Using cached exceptiongroup-1.2.1-py3-none-any.whl.metadata (6.6 kB)
Requirement already satisfied: sympy in ./venv/lib/python3.10/site-packages (from torch->-r requirements_versions.txt (line 28)) (1.12.1)
Collecting toolz (from altair<6.0,>=4.2.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached toolz-0.12.1-py3-none-any.whl.metadata (5.1 kB)
Collecting aiohttp!=4.0.0a0,!=4.0.0a1 (from fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22))
  Using cached aiohttp-3.9.5-cp310-cp310-macosx_10_9_x86_64.whl.metadata (7.5 kB)
Collecting smmap<6,>=3.0.1 (from gitdb<5,>=4.0.1->GitPython==3.1.32->-r requirements_versions.txt (line 2))
  Using cached smmap-5.0.1-py3-none-any.whl.metadata (4.3 kB)
Collecting attrs>=22.2.0 (from jsonschema->jsonmerge==1.8.0->-r requirements_versions.txt (line 14))
  Using cached attrs-23.2.0-py3-none-any.whl.metadata (9.5 kB)
Collecting jsonschema-specifications>=2023.03.6 (from jsonschema->jsonmerge==1.8.0->-r requirements_versions.txt (line 14))
  Using cached jsonschema_specifications-2023.12.1-py3-none-any.whl.metadata (3.0 kB)
Collecting referencing>=0.28.4 (from jsonschema->jsonmerge==1.8.0->-r requirements_versions.txt (line 14))
  Using cached referencing-0.35.1-py3-none-any.whl.metadata (2.8 kB)
Collecting rpds-py>=0.7.1 (from jsonschema->jsonmerge==1.8.0->-r requirements_versions.txt (line 14))
  Using cached rpds_py-0.18.1-cp310-cp310-macosx_10_12_x86_64.whl.metadata (4.1 kB)
Collecting contourpy>=1.0.1 (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached contourpy-1.2.1-cp310-cp310-macosx_10_9_x86_64.whl.metadata (5.8 kB)
Collecting cycler>=0.10 (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB)
Collecting fonttools>=4.22.0 (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached fonttools-4.53.0-cp310-cp310-macosx_10_9_universal2.whl.metadata (162 kB)
Collecting kiwisolver>=1.3.1 (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached kiwisolver-1.4.5-cp310-cp310-macosx_10_9_x86_64.whl.metadata (6.4 kB)
Collecting pyparsing>=2.3.1 (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached pyparsing-3.1.2-py3-none-any.whl.metadata (5.1 kB)
Collecting python-dateutil>=2.7 (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytz>=2020.1 (from pandas<3.0,>=1.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached pytz-2024.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas<3.0,>=1.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached tzdata-2024.1-py2.py3-none-any.whl.metadata (1.4 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in ./venv/lib/python3.10/site-packages (from requests->clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (3.3.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./venv/lib/python3.10/site-packages (from requests->clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (2.2.1)
Collecting click>=7.0 (from uvicorn>=0.14.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Requirement already satisfied: wcwidth<0.3.0,>=0.2.12 in ./venv/lib/python3.10/site-packages (from ftfy->open-clip-torch==2.20.0->-r requirements_versions.txt (line 19)) (0.2.13)
Collecting llvmlite<0.43,>=0.42.0dev0 (from numba->facexlib==0.3.0->-r requirements_versions.txt (line 9))
  Using cached llvmlite-0.42.0-cp310-cp310-macosx_10_9_x86_64.whl.metadata (4.8 kB)
Requirement already satisfied: mpmath<1.4.0,>=1.1.0 in ./venv/lib/python3.10/site-packages (from sympy->torch->-r requirements_versions.txt (line 28)) (1.3.0)
Collecting aiosignal>=1.1.2 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22))
  Using cached aiosignal-1.3.1-py3-none-any.whl.metadata (4.0 kB)
Collecting frozenlist>=1.1.1 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22))
  Using cached frozenlist-1.4.1-cp310-cp310-macosx_10_9_x86_64.whl.metadata (12 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22))
  Using cached multidict-6.0.5-cp310-cp310-macosx_10_9_x86_64.whl.metadata (4.2 kB)
Collecting yarl<2.0,>=1.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22))
  Using cached yarl-1.9.4-cp310-cp310-macosx_10_9_x86_64.whl.metadata (31 kB)
Collecting async-timeout<5.0,>=4.0 (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 22))
  Using cached async_timeout-4.0.3-py3-none-any.whl.metadata (4.2 kB)
Collecting six>=1.5 (from python-dateutil>=2.7->matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 11))
  Using cached six-1.16.0-py2.py3-none-any.whl.metadata (1.8 kB)
Using cached setuptools-69.5.1-py3-none-any.whl (894 kB)
Using cached GitPython-3.1.32-py3-none-any.whl (188 kB)
Using cached Pillow-9.5.0-cp310-cp310-macosx_10_10_x86_64.whl (3.4 MB)
Using cached accelerate-0.21.0-py3-none-any.whl (244 kB)
Using cached blendmodes-2022-py3-none-any.whl (10 kB)
Using cached clean_fid-0.1.35-py3-none-any.whl (26 kB)
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Using cached einops-0.4.1-py3-none-any.whl (28 kB)
Using cached facexlib-0.3.0-py3-none-any.whl (59 kB)
Using cached fastapi-0.94.0-py3-none-any.whl (56 kB)
Using cached gradio-3.41.2-py3-none-any.whl (20.1 MB)
Using cached httpcore-0.15.0-py3-none-any.whl (68 kB)
Using cached inflection-0.5.1-py2.py3-none-any.whl (9.5 kB)
Using cached kornia-0.6.7-py2.py3-none-any.whl (565 kB)
Using cached lark-1.1.2-py2.py3-none-any.whl (104 kB)
Using cached numpy-1.26.2-cp310-cp310-macosx_10_9_x86_64.whl (20.6 MB)
Using cached omegaconf-2.2.3-py3-none-any.whl (79 kB)
Using cached open_clip_torch-2.20.0-py3-none-any.whl (1.5 MB)
Using cached piexif-1.1.3-py2.py3-none-any.whl (20 kB)
Using cached psutil-5.9.5-cp36-abi3-macosx_10_9_x86_64.whl (245 kB)
Using cached pytorch_lightning-1.9.4-py3-none-any.whl (827 kB)
Using cached resize_right-0.0.2-py3-none-any.whl (8.9 kB)
Using cached safetensors-0.4.2-cp310-cp310-macosx_10_12_x86_64.whl (426 kB)
Using cached scikit_image-0.21.0-cp310-cp310-macosx_10_9_x86_64.whl (13.0 MB)
Using cached spandrel-0.1.6-py3-none-any.whl (278 kB)
Using cached tomesd-0.1.3-py3-none-any.whl (11 kB)
Using cached torchdiffeq-0.2.3-py3-none-any.whl (31 kB)
Using cached torchsde-0.2.6-py3-none-any.whl (61 kB)
Using cached transformers-4.30.2-py3-none-any.whl (7.2 MB)
Using cached httpx-0.24.1-py3-none-any.whl (75 kB)
Using cached pillow_avif_plugin-1.4.3-cp310-cp310-macosx_10_10_x86_64.whl (8.3 MB)
Using cached anyio-3.7.1-py3-none-any.whl (80 kB)
Using cached gradio_client-0.5.0-py3-none-any.whl (298 kB)
Using cached sniffio-1.3.1-py3-none-any.whl (10 kB)
Using cached aenum-3.1.15-py3-none-any.whl (137 kB)
Using cached aiofiles-23.2.1-py3-none-any.whl (15 kB)
Using cached altair-5.3.0-py3-none-any.whl (857 kB)
Using cached deprecation-2.1.0-py2.py3-none-any.whl (11 kB)
Using cached gitdb-4.0.11-py3-none-any.whl (62 kB)
Using cached h11-0.12.0-py3-none-any.whl (54 kB)
Using cached imageio-2.34.1-py3-none-any.whl (313 kB)
Using cached importlib_resources-6.4.0-py3-none-any.whl (38 kB)
Using cached jsonschema-4.22.0-py3-none-any.whl (88 kB)
Using cached lazy_loader-0.4-py3-none-any.whl (12 kB)
Using cached lightning_utilities-0.11.2-py3-none-any.whl (26 kB)
Using cached matplotlib-3.9.0-cp310-cp310-macosx_10_12_x86_64.whl (7.9 MB)
Using cached orjson-3.10.3-cp310-cp310-macosx_10_15_x86_64.macosx_11_0_arm64.macosx_10_15_universal2.whl (253 kB)
Using cached pandas-2.2.2-cp310-cp310-macosx_10_9_x86_64.whl (12.6 MB)
Using cached pydantic-1.10.15-cp310-cp310-macosx_10_9_x86_64.whl (2.9 MB)
Using cached pywavelets-1.6.0-cp310-cp310-macosx_10_9_x86_64.whl (4.4 MB)
Downloading scipy-1.13.1-cp310-cp310-macosx_10_9_x86_64.whl (39.3 MB)
                                            0.0/39.3 MB ? eta -:--:--

stderr: WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x10bbd4f70>, 'Connection to files.pythonhosted.org timed out. (connect timeout=15)')': /packages/33/59/41b2529908c002ade869623b87eecff3e11e3ce62e996d0bdcb536984187/scipy-1.13.1-cp310-cp310-macosx_10_9_x86_64.whl
ERROR: Exception:
Traceback (most recent call last):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/urllib3/response.py", line 438, in _error_catcher
    yield
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/urllib3/response.py", line 561, in read
    data = self._fp_read(amt) if not fp_closed else b""
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/urllib3/response.py", line 527, in _fp_read
    return self._fp.read(amt) if amt is not None else self._fp.read()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 98, in read
    data: bytes = self.__fp.read(amt)
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py", line 466, in read
    s = self.fp.read(amt)
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socket.py", line 705, in readinto
    return self._sock.recv_into(b)
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py", line 1307, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py", line 1163, in read
    return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 245, in wrapper
    return func(self, options, args)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 377, in run
    requirement_set = resolver.resolve(
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 179, in resolve
    self.factory.preparer.prepare_linked_requirements_more(reqs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/operations/prepare.py", line 552, in prepare_linked_requirements_more
    self._complete_partial_requirements(
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/operations/prepare.py", line 467, in _complete_partial_requirements
    for link, (filepath, _) in batch_download:
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/network/download.py", line 183, in __call__
    for chunk in chunks:
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/cli/progress_bars.py", line 53, in _rich_progress_bar
    for chunk in iterable:
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_internal/network/utils.py", line 63, in response_chunks
    for chunk in response.raw.stream(
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/urllib3/response.py", line 622, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/urllib3/response.py", line 560, in read
    with self._error_catcher():
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/pip/_vendor/urllib3/response.py", line 443, in _error_catcher
    raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.

Analysis

It looks like files.pythonhosted.org is unreachable while pip is installing the dependencies, and turning on the VPN is not an option (that triggers the separate connectivity problem described below).
The remaining option is to switch the pip index to a domestic mirror.

Solution

Switch the pip index to the Aliyun mirror with: pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
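
To confirm the mirror is active, or to use it for just a single install without changing any configuration (the install command below mirrors the one from the failing log):

pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
pip config list    # should show global.index-url='https://mirrors.aliyun.com/pypi/simple/'

# one-off alternative: pass the mirror only for this command
./venv/bin/python3.10 -m pip install -r requirements_versions.txt --prefer-binary -i https://mirrors.aliyun.com/pypi/simple/

The user-level setting written by pip config set is also read by the pip inside the WebUI's venv, so ./webui.sh should pick it up on the next run.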

PS: Any of these domestic mirrors will also work:
Aliyun: https://mirrors.aliyun.com/pypi/simple/
Douban: http://pypi.douban.com/simple/
Tsinghua University: https://pypi.tuna.tsinghua.edu.cn/simple/
USTC (University of Science and Technology of China): https://pypi.mirrors.ustc.edu.cn/simple/

Network connection problem when installing Stable Diffusion

Problem

When installing stable-diffusion-webui and running ./webui.sh, the following error appears:

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on couldhll user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.3.9.4)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Installing torch and torchvision
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', TimeoutError('timed out'))': /simple/torch/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', TimeoutError('timed out'))': /simple/torch/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', TimeoutError('timed out'))': /simple/torch/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', TimeoutError('timed out'))': /simple/torch/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', TimeoutError('timed out'))': /simple/torch/
ERROR: Could not find a version that satisfies the requirement torch==2.1.0 (from versions: none)
ERROR: No matching distribution found for torch==2.1.0
Traceback (most recent call last):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/launch.py", line 48, in 
    main()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/launch.py", line 39, in main
    prepare_environment()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/launch_utils.py", line 380, in prepare_environment
    run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/launch_utils.py", line 115, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "/Users/couldhll/Desktop/stable-diffusion-webui/venv/bin/python3.10" -m pip install torch==2.1.0 torchvision==0.16.0
Error code: 1

Solution

Everything worked again after turning off the VPN.
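
The retries in the log all fail with "ProxyError: Cannot connect to proxy", i.e. pip was being routed through the VPN's proxy rather than failing on PyPI itself. If the problem persists after the VPN is off, it is worth checking that no stale proxy settings are left in the shell before rerunning the installer (a sketch; whether the VPN exports these variables at all depends on the client):

env | grep -i proxy                                   # see which proxy variables are still set
unset http_proxy https_proxy all_proxy HTTP_PROXY HTTPS_PROXY ALL_PROXY
./webui.sh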

《我的阿勒泰》 (My Altay)

Go love, go live, go get hurt.

What counts as useful, Li Wenxiu?
Were you born just so you could serve other people?
Look at the trees and the grass on this steppe.
If people eat them or use them, we call them useful.
And if nobody uses them, staying out on the grassland like this is fine too.
Free and easy, isn't it?

They are different.
They have their own way of living, their own way of getting along with this world.
You can disagree with them, but you cannot look down on them and try to change them.

However bumpy life gets, live it so it shines.

But tradition has not always been the way it is now.
You insist on keeping to tradition, on taking the Xiannüwan trail, on staying a hunter.
But a hundred years ago there was no Xiannüwan trail, and two hundred years ago there were no hunting rifles.
Every tradition and every civilization was worked out, bit by bit, by people as the world changed.
Nothing is fixed forever; constant change is the only thing that never changes.
This era is another one of those changes.
Adapting to the new era and adjusting how you live is the sensible thing to do.
Clinging to old traditions is not necessarily right.

Xiu, you know how good the Naren summer pasture is, so why do the herders still keep moving camp?
Because they have to give the summer pasture time to rest, so its water and grass can grow lush again.
That way, when the next year comes, the herders can bring their cattle and sheep back to something even better.
Give it a little time.