Releases: fkleon/stable-diffusion.cpp

master-473-9565c7f

18 Jan 04:16
9565c7f

add support for flux2 klein (#1193)

* add support for flux2 klein 4b

* add support for flux2 klein 8b

* use attention_mask in Flux.2 klein LLMEmbedder (see the sketch after this list)

* update docs
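
The attention-mask bullet above is terse, so here is a minimal C++ sketch of the general technique: an additive mask with 0 for real tokens and a large negative value for padding, so the embedder's attention softmax ignores padded positions. The function name, the row-per-sequence layout, and the -1e9 fill value are illustrative assumptions, not the repository's code.

```cpp
// Minimal sketch (not the project's actual API): build an additive attention
// mask for a batch of padded token sequences, as an LLM-based text embedder
// typically does. Real-token positions get 0.0f, padding positions get a
// large negative value so softmax effectively ignores them.
#include <cstdio>
#include <vector>

std::vector<float> build_attention_mask(const std::vector<int>& seq_lens, int max_len) {
    // One row of length max_len per sequence; rows are added to attention scores.
    std::vector<float> mask(seq_lens.size() * max_len, -1e9f);
    for (size_t s = 0; s < seq_lens.size(); ++s) {
        for (int t = 0; t < seq_lens[s] && t < max_len; ++t) {
            mask[s * max_len + t] = 0.0f;  // real token: leave the score unchanged
        }
    }
    return mask;
}

int main() {
    // Two prompts padded to length 6: the first has 4 real tokens, the second 2.
    auto mask = build_attention_mask({4, 2}, 6);
    for (int s = 0; s < 2; ++s) {
        for (int t = 0; t < 6; ++t) printf("%12.0f ", mask[s * 6 + t]);
        printf("\n");
    }
    return 0;
}
```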

master-450-a2d83dd

28 Dec 07:56
a2d83dd

refactor: move pmid condition logic into get_pmid_condition (#1148)

master-391-5865b5e

04 Dec 10:02
5865b5e

refactor: split SDParams to SDCliParams/SDContextParams/SDGenerationP…

master-377-2034588

24 Nov 07:25
2034588

refactor: optimize the handling of sample method (#999)

master-368-28ffb6c

18 Nov 09:15
28ffb6c

fix: resolve issue with concat multiple LoRA output diffs at runtime …
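
A minimal sketch of what combining several LoRA output diffs means in general: each active LoRA contributes a scaled low-rank update to the same base weight, and the updates are summed. `LoraDiff`, `apply_lora_diffs`, and the precomputed-diff layout are hypothetical names for illustration, not the project's API.

```cpp
// Minimal sketch, not the repository's code: apply several LoRA weight diffs
// to one base weight. Each LoRA contributes scale_k * diff_k, where diff_k is
// the precomputed B*A update with the same shape as the base weight.
#include <cstdio>
#include <vector>

struct LoraDiff {
    float scale;               // strength multiplier for this LoRA
    std::vector<float> diff;   // precomputed B*A update, same shape as the base weight
};

void apply_lora_diffs(std::vector<float>& weight, const std::vector<LoraDiff>& loras) {
    for (const auto& lora : loras) {
        for (size_t i = 0; i < weight.size(); ++i) {
            weight[i] += lora.scale * lora.diff[i];  // W <- W + scale * (B*A)
        }
    }
}

int main() {
    std::vector<float> w = {1.0f, 2.0f, 3.0f};
    apply_lora_diffs(w, {{0.5f, {0.2f, 0.2f, 0.2f}}, {1.0f, {-0.1f, 0.0f, 0.1f}}});
    printf("%.2f %.2f %.2f\n", w[0], w[1], w[2]);  // 1.00 2.10 3.20
    return 0;
}
```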

master-362-742a733

16 Nov 09:10
742a733

feat: add cpu rng (#977)
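
For context, a minimal sketch of what a CPU RNG buys in a diffusion pipeline: seeded, backend-independent Gaussian noise that is reproducible across runs and devices. `cpu_randn` and its use of std::mt19937 are assumptions for illustration; the PR's actual RNG class is not shown here.

```cpp
// Minimal sketch, assuming the point of a CPU RNG is backend-independent,
// reproducible noise: a seeded std::mt19937 plus std::normal_distribution
// yields the same Gaussian init noise no matter which GPU backend later runs
// the diffusion. Names here are illustrative, not the project's API.
#include <cstdio>
#include <random>
#include <vector>

std::vector<float> cpu_randn(size_t n, uint64_t seed) {
    std::mt19937 rng(static_cast<uint32_t>(seed));
    std::normal_distribution<float> dist(0.0f, 1.0f);  // standard normal noise
    std::vector<float> out(n);
    for (auto& v : out) v = dist(rng);
    return out;
}

int main() {
    auto a = cpu_randn(4, 42);
    auto b = cpu_randn(4, 42);  // same seed -> identical samples, run after run
    for (size_t i = 0; i < a.size(); ++i) printf("%f %f\n", a[i], b[i]);
    return 0;
}
```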

master-348-8f6c5c2

06 Nov 11:28
8f6c5c2

refactor: simplify the model loading logic (#933)

* remove String2GGMLType

* remove preprocess_tensor

* fix clip init

* simplify the logic for reading weights
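
A minimal sketch of the simplified weight-reading idea, under the assumption that loading boils down to a single pass over named tensors: look each checkpoint tensor up in the model by name and copy it when sizes agree. All types and names below are illustrative, not the repository's loader.

```cpp
// Minimal sketch, not the repository's loader: one pass over the named tensors
// in a checkpoint, looking each name up in the destination model and copying
// the data when the sizes match.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Tensor {
    std::vector<float> data;
};

void read_weights(const std::map<std::string, std::vector<float>>& file_tensors,
                  std::map<std::string, Tensor>& model_tensors) {
    for (const auto& [name, values] : file_tensors) {
        auto it = model_tensors.find(name);
        if (it == model_tensors.end()) {
            printf("skipping unknown tensor: %s\n", name.c_str());
            continue;
        }
        if (it->second.data.size() != values.size()) {
            printf("size mismatch for %s\n", name.c_str());
            continue;
        }
        it->second.data = values;  // copy the weight into the model
    }
}

int main() {
    std::map<std::string, std::vector<float>> file = {{"blk.0.weight", {1, 2, 3}}};
    std::map<std::string, Tensor> model = {{"blk.0.weight", Tensor{{0, 0, 0}}}};
    read_weights(file, model);
    printf("%.0f %.0f %.0f\n", model["blk.0.weight"].data[0],
           model["blk.0.weight"].data[1], model["blk.0.weight"].data[2]);
    return 0;
}
```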

master-347-6103d86

03 Nov 10:23
6103d86

refactor: introduce GGMLRunnerContext (#928)

* introduce GGMLRunnerContext

* add Flash Attention enable control through GGMLRunnerContext

* add conv2d_direct enable control through GGMLRunnerContext
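
A minimal sketch of the pattern this refactor names: a single context object carries per-run switches (flash attention, direct conv2d) and every model runner consults it while building its graph. Field and function names are hypothetical, not the actual GGMLRunnerContext definition.

```cpp
// Minimal sketch of a shared runner context: one struct holds per-run options
// so every runner reads the same switches instead of scattered global flags.
// Names are hypothetical, not the project's GGMLRunnerContext.
#include <cstdio>

struct RunnerContext {
    bool flash_attention_enabled = false;  // use fused flash-attention kernels if available
    bool conv2d_direct_enabled   = false;  // use direct conv2d instead of im2col + matmul
};

void build_attention(const RunnerContext& ctx) {
    if (ctx.flash_attention_enabled) {
        printf("building graph with flash attention\n");
    } else {
        printf("building graph with standard attention\n");
    }
}

int main() {
    RunnerContext ctx;
    ctx.flash_attention_enabled = true;
    build_attention(ctx);  // every runner consults the same context
    return 0;
}
```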

master-343-dd75fc0

28 Oct 21:50
dd75fc0

refactor: unify the naming style of ggml extension functions (#921)

master-340-9e28be6

26 Oct 07:47
9e28be6

feat: add chroma radiance support (#910)

* add chroma radiance support

* fix ci

* simplify generate_init_latent (see the sketch after this list)

* workaround: avoid ggml cuda error

* format code

* add chroma radiance doc
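
A minimal sketch of what a unified generate_init_latent could look like once pixel-space models are in the mix: allocate Gaussian noise whose shape depends on whether the model denoises VAE latents or raw pixels. The 4-channel latent, the 8x VAE downscale factor, and all names are assumptions for illustration only.

```cpp
// Minimal sketch of a unified init-noise helper: the shape of the starting
// noise depends on whether the model works in VAE latent space (C x H/8 x W/8)
// or directly in pixel space (3 x H x W). The 4 latent channels and the 8x
// downscale are assumptions for illustration, not the repository's values.
#include <cstdio>
#include <random>
#include <vector>

std::vector<float> generate_init_noise(int width, int height, bool pixel_space, uint64_t seed) {
    int channels = pixel_space ? 3 : 4;           // RGB pixels vs. assumed 4-channel latent
    int w = pixel_space ? width : width / 8;      // assumed 8x VAE downscale
    int h = pixel_space ? height : height / 8;
    std::mt19937 rng(static_cast<uint32_t>(seed));
    std::normal_distribution<float> dist(0.0f, 1.0f);
    std::vector<float> noise(static_cast<size_t>(channels) * w * h);
    for (auto& v : noise) v = dist(rng);
    printf("init noise: %d x %d x %d\n", channels, h, w);
    return noise;
}

int main() {
    generate_init_noise(512, 512, false, 42);  // latent-space model: 4 x 64 x 64
    generate_init_noise(512, 512, true, 42);   // pixel-space model: 3 x 512 x 512
    return 0;
}
```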