LocalAI version:
LocalAI v4.2.4 (42a8db3)
Environment, CPU architecture, OS, and Version:
RTX 5090 32GB
Describe the bug
When installing predefined models such as qwen-image or flux.1-dev, nothing is downloaded.
LocalAI only writes the .yaml specification; no .safetensors files are fetched.
To Reproduce
- Go to Install Models -> search for qwen-image -> click Install. The model is reported as installed within about 10 seconds, without the actual model files being downloaded.
Expected behavior
Model files should be downloaded
Logs
First try to install qwen-image from gallery
May 14 12:20:33 DEBUG API job submitted to install galleryID="qwen-image" caller={caller.file="/build/core/http/routes/ui_api.go" caller.L=715 }
May 14 12:20:33 INFO HTTP request method="POST" path="/api/models/install/qwen-image" status=200 caller={caller.file="/build/core/http/app.go" caller.L=204 }
May 14 12:20:33 DEBUG HTTP request method="GET" path="/api/resources" status=200 caller={caller.file="/build/core/http/app.go" caller.L=202 }
May 14 12:20:33 DEBUG Written config file file="/models/qwen-image.yaml" caller={caller.file="/build/core/gallery/models.go" caller.L=319 }
May 14 12:20:33 DEBUG Written gallery file file="/models/._gallery_qwen-image.yaml" caller={caller.file="/build/core/gallery/models.go" caller.L=329 }
May 14 12:20:33 DEBUG Installed model model="qwen-image" caller={caller.file="/build/core/gallery/models.go" caller.L=136 }
May 14 12:20:33 DEBUG Installing backend backend="diffusers" caller={caller.file="/build/core/gallery/models.go" caller.L=138 }
May 14 12:20:33 DEBUG No system backends found caller={caller.file="/build/core/gallery/backends.go" caller.L=506 }
May 14 12:20:33 DEBUG [inference_defaults] applying defaults for model modelID="embeddinggemma-300m" family=map[min_p:0 repeat_penalty:1 temperature:1 top_k:64 top_p:0.95] caller={caller.file="/build/core/config/inference_defaults.go" caller.L=90 }
May 14 12:20:33 DEBUG [gguf] guessDefaultsFromFile: NGPULayers set NGPULayers=0x1671442f77a8 modelName="Embeddinggemma 300m Qat Q8_0 Unquantized" caller={caller.file="/build/core/config/gguf.go" caller.L=48 }
May 14 12:20:33 DEBUG [gguf] Model file loaded file="embeddinggemma-300m-qat-Q8_0.gguf" eosTokenID=1 bosTokenID=2 modelName="Embeddinggemma 300m Qat Q8_0 Unquantized" architecture="gemma-embedding" caller={caller.file="/build/core/config/gguf.go" caller.L=66 }
May 14 12:20:33 ERROR llamaCppDefaults: panic while parsing gguf file caller={caller.file="/build/core/config/hooks_llamacpp.go" caller.L=30 }
May 14 12:20:33 INFO Preloading models path="//models" caller={caller.file="/build/core/config/model_config_loader.go" caller.L=306 }
Model name: z-image-diffusers
Model name: embeddinggemma-300m
Model name: flux.2-dev
Model name: jina-reranker-v1-tiny-en
Model name: qwen-image-edit-2509
Model name: qwen-image
Model name: Z-Image-Turbo
Model name: flux.1-dev-ggml-q8_0
Model name: vllm-omni-z-image-turbo
Second try to install qwen-image from gallery
May 14 12:23:48 DEBUG API job submitted to install galleryID="qwen-image" caller={caller.file="/build/core/http/routes/ui_api.go" caller.L=715 }
May 14 12:23:48 INFO HTTP request method="POST" path="/api/models/install/qwen-image" status=200 caller={caller.file="/build/core/http/app.go" caller.L=204 }
May 14 12:23:48 DEBUG Written config file file="/models/qwen-image.yaml" caller={caller.file="/build/core/gallery/models.go" caller.L=319 }
May 14 12:23:48 DEBUG Written gallery file file="/models/._gallery_qwen-image.yaml" caller={caller.file="/build/core/gallery/models.go" caller.L=329 }
May 14 12:23:48 DEBUG Installed model model="qwen-image" caller={caller.file="/build/core/gallery/models.go" caller.L=136 }
May 14 12:23:48 DEBUG Installing backend backend="diffusers" caller={caller.file="/build/core/gallery/models.go" caller.L=138 }
May 14 12:23:48 DEBUG No system backends found caller={caller.file="/build/core/gallery/backends.go" caller.L=506 }
May 14 12:23:48 ERROR llamaCppDefaults: panic while parsing gguf file caller={caller.file="/build/core/config/hooks_llamacpp.go" caller.L=30 }
May 14 12:23:48 INFO Preloading models path="//models" caller={caller.file="/build/core/config/model_config_loader.go" caller.L=306 }
Model name: jina-reranker-v1-tiny-en
Model name: Z-Image-Turbo
Model name: flux.1-dev-ggml-q8_0
Model name: vllm-omni-z-image-turbo
Model name: z-image-diffusers
Model name: qwen-image
Model name: flux.2-dev
Additional context
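A quick way to confirm on disk that only the spec was written is to scan the generated YAML for weight filenames and check whether those files exist in the models directory. This is a diagnostic sketch, not LocalAI code: it simply greps the spec text for .safetensors/.gguf names, so it does not depend on LocalAI's exact YAML field names, and the spec content below is a stand-in.

```python
import os
import re
import tempfile

def missing_weight_files(yaml_path: str, models_dir: str) -> list[str]:
    """Report weight files referenced in a model spec that are absent
    from the models directory (matched by filename only)."""
    with open(yaml_path) as f:
        text = f.read()
    referenced = re.findall(r"[\w./-]+\.(?:safetensors|gguf)", text)
    return [name for name in referenced
            if not os.path.exists(os.path.join(models_dir, os.path.basename(name)))]

# Stand-in example: the spec names a weight file that was never downloaded,
# so it is reported as missing.
with tempfile.TemporaryDirectory() as d:
    spec = os.path.join(d, "qwen-image.yaml")
    with open(spec, "w") as f:
        f.write("parameters:\n  model: qwen-image.safetensors\n")
    print(missing_weight_files(spec, d))  # -> ['qwen-image.safetensors']
```

In the actual setup, pointing this at /models/qwen-image.yaml after "installation" should list the weights the spec references but that were never fetched.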