This book really does cross disciplines.
Figure 4-3 Comparison of how different build tools work
Figure 17-2 CSS animations versus JavaScript animations
Figure 25-1 Dependency-injection terms and the relationships between them
That was 1959, the "three years of hardship" the textbooks speak of. As everywhere in the country, university meals were extremely plain; some oral histories described the student canteen like this: "Honghu water in the morning (porridge thin enough to see the bottom of the bowl), waves upon waves in the evening (vegetable soup), and Little Erhei at noon (two sweet-potato-flour buns)."
As gourmets put it, food works on three levels: first it fills the stomach, then it pleases the palate, and finally it comforts the soul.
A place you can still go back to is merely your hometown; only the place you can never go back to becomes your native land.
At the end of 2001 I began planning a food program. When Shen Gong heard about it, he told me over dinner: "Loving to eat is a wonderful thing. Nothing tied to basic human instincts should be cause for shame."
Later, Chen Li gave me a reminder: "Reading is really a process of intake; thinking is the process of digestion." In his words, "the best state is limited reading, limited listening, and unlimited thinking."
The main consumers in China's restaurant industry roughly fall into three tiers: subsistence, flavor-driven, and aesthetic.
You may own ten thousand rooms, yet at night you sleep on six feet of bed; you may have ten thousand taels of gold, yet you eat only three meals a day.
"Mini Program Registration Guide"
"Getting Started with Mini Program Development"
"Hello World in a Mini Program"
"Building a Business-Card Mini Program"
"Cross-Device Upgrade of the Business-Card Mini Program"
After selecting a new *.safetensors model (downloaded from the web) in the WebUI, the following error appears:
Applying attention optimization: sub-quadratic... done.
Model loaded in 16.2s (load weights from disk: 0.5s, create model: 0.9s, apply weights to model: 13.4s, apply dtype to VAE: 0.2s, move model to device: 0.1s, load textual inversion embeddings: 0.2s, calculate empty prompt: 0.8s).
Reusing loaded model dreamshaper_8.safetensors [879db523c3] to load AnythingXL_xl.safetensors [8421598e93]
Loading weights [8421598e93] from /Users/couldhll/Desktop/stable-diffusion-webui/models/Stable-diffusion/AnythingXL_xl.safetensors
Creating model from config: /Users/couldhll/Desktop/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
changing setting sd_model_checkpoint to AnythingXL_xl.safetensors [8421598e93]: RuntimeError
Traceback (most recent call last):
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/options.py", line 165, in set
option.onchange()
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 879, in reload_model_weights
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 751, in load_model
send_model_to_device(sd_model)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 681, in send_model_to_device
m.to(shared.device)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in to
return self._apply(convert)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 833, in _apply
param_applied = fn(param)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: MPS backend out of memory (MPS allocated: 6.51 GB, other allocations: 243.82 MB, max allowed: 6.80 GB). Tried to allocate 120.62 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
*** Error completing request
*** Arguments: ('task(g4jhu75lk3xb84y)', <gradio.routes.Request object at 0x169974070>, 'a muscular female,muscles in the legs, bra,T-back,brown skin, short hair', '', [], 1, 1, 9, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/processing.py", line 832, in process_images
sd_models.reload_model_weights()
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'
Fix: run webui with the --lowvram flag:
./webui.sh --lowvram
or
in webui-user.sh, find the line
#export COMMANDLINE_ARGS=""
and change it to:
export COMMANDLINE_ARGS="--lowvram --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"
PS: because this machine runs macOS, the macOS-specific flags --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate are also required; they are copied from webui-macos-env.sh (export COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate").
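The error message above also points at PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 as a way to lift the MPS allocation ceiling. As a sketch only (the log itself warns this may cause system failure, so whether it is safe on a given Mac is an assumption), it can be combined with the low-VRAM launch:
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 ./webui.sh --lowvram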
After selecting another new *.safetensors model (downloaded from the web) in the WebUI, the following error appears:
Creating model from config: /Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/configs/stable-diffusion/v2-inference-v.yaml
Applying attention optimization: sub-quadratic... done.
Model loaded in 35.1s (load weights from disk: 10.0s, find config: 11.5s, create model: 0.2s, apply weights to model: 9.0s, apply half(): 2.0s, move model to device: 0.2s, calculate empty prompt: 2.0s).
Reusing loaded model v2-1_768-ema-pruned.ckpt [ad2a33c361] to load dreamshaper_8.safetensors [879db523c3]
Loading weights [879db523c3] from /Users/couldhll/Desktop/stable-diffusion-webui/models/Stable-diffusion/dreamshaper_8.safetensors
Creating model from config: /Users/couldhll/Desktop/stable-diffusion-webui/configs/v1-inference.yaml
/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
creating model quickly: OSError
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/ui_settings.py", line 316, in <lambda>
fn=lambda value, k=k: self.run_settings_single(value, key=k),
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/ui_settings.py", line 95, in run_settings_single
if value is None or not opts.set(key, value):
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/options.py", line 165, in set
option.onchange()
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 879, in reload_model_weights
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Failed to create model quickly; will retry using slow method.
changing setting sd_model_checkpoint to dreamshaper_8.safetensors [879db523c3]: OSError
Traceback (most recent call last):
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/options.py", line 165, in set
option.onchange()
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 879, in reload_model_weights
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
File "/Users/couldhll/Desktop/stable-diffusion-webui/modules/sd_models.py", line 732, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/Users/couldhll/Desktop/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/Users/couldhll/Desktop/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Fix: the openai/clip-vit-large-patch14 directory is missing under stable-diffusion-webui, so the tokenizer cannot be found locally. Create it by cloning the repository, starting from the stable-diffusion-webui root directory:
mkdir openai
cd openai
git clone https://huggingface.co/openai/clip-vit-large-patch14.git
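To sanity-check that the clone provides what the traceback was asking for, it may help to confirm the usual CLIPTokenizer files landed in the new directory (the file names below follow the standard layout of the clip-vit-large-patch14 repository and are an assumption; compare them with what the clone actually produced):
ls clip-vit-large-patch14
# expect tokenizer files such as vocab.json, merges.txt, tokenizer_config.json and special_tokens_map.json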
PS: git clone ran into a "Couldn't connect to server" problem. When downloading openai/clip-vit-large-patch14 with the command git clone https://huggingface.co/openai/clip-vit-large-patch14, the following error was reported:
Cloning into 'clip-vit-large-patch14'...
fatal: unable to access 'https://huggingface.co/openai/clip-vit-large-patch14/': Failed to connect to huggingface.co port 443 after 75003 ms: Couldn't connect to server
huggingface.co cannot be reached directly from this network and needs a proxy; using the local proxy http://127.0.0.1:8118, the following command resolves it:
git clone https://huggingface.co/openai/clip-vit-large-patch14.git -c http.proxy="http://127.0.0.1:8118"
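Alternatively (a sketch under the same assumption that the local proxy listens on http://127.0.0.1:8118), the proxy can be supplied through the standard environment variable that git honors for HTTPS traffic:
https_proxy=http://127.0.0.1:8118 git clone https://huggingface.co/openai/clip-vit-large-patch14.git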