Tags: niceqwer55555/llama-cpp-python
Update build-wheels-cu128-win.yml
Update llama.cpp (20251115) and move the ggml-related code to _ggml.py.
Use httplib to download a model from a URL when libcurl is disabled. Note: when LLAMA_HTTPLIB is OFF, llama-server cannot be built; hint: to skip building the server, set -DLLAMA_BUILD_SERVER=OFF.
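A minimal sketch of how the flags mentioned in that tag fit together when configuring llama.cpp without libcurl. The `-DLLAMA_BUILD_SERVER=OFF` hint comes from the message itself; `LLAMA_CURL` is the existing llama.cpp option for the libcurl dependency, and the rest of the invocation is an assumption, not the exact command used in this release.

```shell
# Sketch: configure llama.cpp with libcurl disabled (LLAMA_CURL=OFF assumed as
# the switch for the curl dependency). Model downloads from a URL then fall
# back to the bundled httplib. If LLAMA_HTTPLIB ends up OFF in this
# configuration, llama-server cannot be built, so skip the server target
# explicitly as the hint suggests.
cmake -B build \
  -DLLAMA_CURL=OFF \
  -DLLAMA_BUILD_SERVER=OFF
cmake --build build --config Release
```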