Stable Diffusion reports missing clip-vit-large-patch14 when loading a new model

Problem Description

After selecting a newly downloaded *.safetensors model in the WebUI, the following error appears:

Creating model from config: /Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/configs/stable-diffusion/v2-inference-v.yaml
Applying attention optimization: sub-quadratic... done.
Model loaded in 35.1s (load weights from disk: 10.0s, find config: 11.5s, create model: 0.2s, apply weights to model: 9.0s, apply half(): 2.0s, move model to device: 0.2s, calculate empty prompt: 2.0s).
Reusing loaded model v2-1_768-ema-pruned.ckpt [ad2a33c361] to load dreamshaper_8.safetensors [879db523c3]
Loading weights [879db523c3] from /Users/couldhll/Desktop/stable-diffusion-webui/models/Stable-diffusion/dreamshaper_8.safetensors
Creating model from config: /Users/couldhll/Desktop/stable-diffusion-webui/configs/v1-inference.yaml
/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
creating model quickly: OSError
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/ui_settings.py", line 316, in <lambda>
    fn=lambda value, k=k: self.run_settings_single(value, key=k),
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/ui_settings.py", line 95, in run_settings_single
    if value is None or not opts.set(key, value):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 879, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Failed to create model quickly; will retry using slow method.
changing setting sd_model_checkpoint to dreamshaper_8.safetensors [879db523c3]: OSError
Traceback (most recent call last):
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 879, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 732, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Problem Analysis

The stable-diffusion-webui root directory has no openai/clip-vit-large-patch14 directory. CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14') therefore cannot resolve the tokenizer locally, and the fallback download from Hugging Face also failed, which raises the OSError shown above.

Solution

Run the following from the stable-diffusion-webui root directory:

mkdir openai
cd openai
git clone https://huggingface.co/openai/clip-vit-large-patch14.git

PS: While running git clone I ran into a "Couldn't connect to server" error.
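
If huggingface.co is unreachable from your network, two common workarounds are a local proxy or a community mirror. The proxy address below is only an example, not part of the original setup:

```shell
# Workarounds for "Couldn't connect to server" during git clone.

# Option 1: route git through a local HTTP proxy (example address; use your own),
# then remove the setting afterwards.
git config --global http.proxy http://127.0.0.1:7890
git clone https://huggingface.co/openai/clip-vit-large-patch14.git
git config --global --unset http.proxy

# Option 2: clone from hf-mirror.com, a community mirror of Hugging Face.
git clone https://hf-mirror.com/openai/clip-vit-large-patch14.git
```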
