You'll first need to download one of the available multi-modal models in GGUF format:
- [llava1.5 7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
- [llava1.5 13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)

Then, when you run the server, you'll also need to specify the path to the CLIP model used for image embedding and the `llava-1-5` chat_format:

```bash
python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
```

Then you can just use the OpenAI API as normal.
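As a minimal sketch of what that looks like, the request below targets the server's OpenAI-compatible chat completions endpoint using only the standard library. It assumes the server started by the command above is listening on `http://localhost:8000` (the default host and port); the model name and image URL are illustrative placeholders, not values from this document.

```python
import json
import urllib.request

# OpenAI-style chat payload with mixed image + text content parts.
payload = {
    "model": "llava-1-5",  # illustrative; the local server serves the model it was launched with
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/cat.png"}},  # placeholder image
                {"type": "text", "text": "What is in this picture?"},
            ],
        }
    ],
}

# POST to the server's OpenAI-compatible endpoint.
request = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

The same payload shape works with the official `openai` Python client pointed at the local server via its `base_url` setting.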