
MINEPIG

A problem deploying a text-generation model locally

Does anyone know how to fix this?
Traceback (most recent call last):
  File "/Users/pmp/AI/Text_Generation_WebUI/oobabooga_macos/text-generation-webui/server.py", line 68, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "/Users/pmp/AI/Text_Generation_WebUI/oobabooga_macos/text-generation-webui/modules/models.py", line 78, in load_model
    output = load_func_map[loader](model_name)
  File "/Users/pmp/AI/Text_Generation_WebUI/oobabooga_macos/text-generation-webui/modules/models.py", line 241, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
  File "/Users/pmp/AI/Text_Generation_WebUI/oobabooga_macos/text-generation-webui/modules/llamacpp_model.py", line 60, in from_pretrained
    result.model = Llama(**params)
  File "/Users/pmp/opt/anaconda3/envs/textgen/lib/python3.10/site-packages/llama_cpp/llama.py", line 313, in __init__
    assert self.model is not None
AssertionError
Device: M2 Max
RAM: 96 GB (on Apple silicon the unified memory can serve as VRAM)
Model: llama-2-13b-chat.ggmlv3.q4_K_M
https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/tree/main
Loader: llama.cpp
https://github.com/ggerganov/llama.cpp
Using: oobabooga/text-generation-webui
https://github.com/oobabooga/text-generation-webui

The error occurs while loading the model. I checked and model_path is correct, but self.model fails to load.
With exactly the same configuration, switching the model to
llama-2-70b-chat.ggmlv3.q4_K_M
https://huggingface.co/TheBloke/Llama-2-70B-Chat-GGML
makes it load correctly and chat works (albeit very slowly).

Any help would be much appreciated, thanks!

A bot in a QQ group replied that it might be a problem with the model file itself. But I downloaded another llama2-13b-chat-ggml model and got exactly the same error.
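For anyone hitting the same assertion: when `Llama(**params)` fails this way, llama.cpp usually prints the real reason (bad magic, unsupported format version, truncated file) to the terminal just above the Python traceback, so that output is worth checking first. A quick stdlib-only header check can also rule out a corrupted or wrong-format download. This is a hypothetical sketch, not part of any of the projects above; the magic byte values are my reading of the llama.cpp sources, so verify them there before trusting the labels:

```python
# Sketch: classify a llama.cpp model file by its first four bytes.
# The magic values below are assumptions taken from reading llama.cpp --
# double-check them against the llama.cpp source before relying on this.
MAGICS = {
    b"tjgg": "GGJT (GGML v1-v3 container, used by *.ggmlv3.* files)",
    b"lmgg": "GGML (oldest, unversioned container)",
    b"GGUF": "GGUF (newer container; GGML-era loaders cannot read it)",
}

def describe_model_file(path: str) -> str:
    """Return a human-readable guess at the container format of a model file."""
    with open(path, "rb") as f:
        magic = f.read(4)
    return MAGICS.get(magic, f"unknown magic {magic!r} -- possibly a corrupted download")
```

If the failing 13B file reports an unknown magic while the working 70B file reports GGJT, the download is likely truncated or corrupted and re-downloading should help. If both report GGJT, a common suspect with q4_K_M files is a llama-cpp-python build that predates k-quant support, in which case upgrading that package inside the textgen conda env is worth trying.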


zclss

No idea either, sorry.