Replies: 7 comments
-
What particular feature in Flux from
-
I'm not aware of a method provided by diffusers that would generate the text embeddings (or image embeddings) for a given model on its own. If there were, that would probably be good enough, since pipelines typically accept embeddings as an input, so the rest can be worked out. To put it differently, it's convenient that the pipeline call takes care of all three steps: text encoding, denoising (transformer/UNet), and VAE decoding. But it also hides the intermediate states, and if you want to do something with them you essentially have to reimplement the whole pipeline yourself.
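For what it's worth, many pipelines do expose their text-encoding step as an encode_prompt method, and most accept prompt_embeds as an input, so a two-stage flow is possible if you wire it up yourself. Below is a minimal sketch, assuming a Flux-style pipeline whose encode_prompt returns (prompt_embeds, pooled_prompt_embeds, text_ids); the exact signature and return values differ between pipeline classes, and the checkpoint name is only an example.

import torch
from diffusers import FluxPipeline

model_id = "black-forest-labs/FLUX.1-dev"  # example checkpoint

# Stage 1: load only the text encoders and compute the embeddings.
text_pipe = FluxPipeline.from_pretrained(
    model_id, transformer=None, vae=None, torch_dtype=torch.bfloat16
).to("cuda")
prompt_embeds, pooled_prompt_embeds, text_ids = text_pipe.encode_prompt(
    prompt="a photo of a cat", prompt_2=None, max_sequence_length=512
)
del text_pipe
torch.cuda.empty_cache()

# Stage 2: load the transformer + VAE (no text encoders) and reuse the embeddings.
pipe = FluxPipeline.from_pretrained(
    model_id,
    text_encoder=None,
    text_encoder_2=None,
    tokenizer=None,
    tokenizer_2=None,
    torch_dtype=torch.bfloat16,
).to("cuda")
image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]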
-
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
-
do you want to take a look at this custom block?
-
Thank you, it does look relevant and configurable. Will give it a try.
-
If you want to cache the same repeated prompts and args, you can do something like this to cache the encode_prompt part. It does not split the pipeline, though, which is a bit trickier since a lot happens in the main __call__ block, but it will cache on repeated calls.

from collections import OrderedDict
from functools import wraps

from common.logger import logger  # project-local logger

# Global cache for prompt embeddings.
# Key is (pipeline_type, args, kwargs).
GLOBAL_PROMPT_CACHE = OrderedDict()
MAX_PROMPT_CACHE_SIZE = 64


def clear_global_prompt_cache():
    """Clear the global prompt embeddings cache."""
    GLOBAL_PROMPT_CACHE.clear()
    logger.debug("Global prompt cache cleared")


def enable_prompt_caching(pipeline):
    """
    Generic wrapper to cache the results of encode_prompt on any diffusers pipeline.

    Uses a global cache shared across pipeline instances to avoid redundant text
    encoding even when pipelines are reloaded.
    """
    if not hasattr(pipeline, "encode_prompt"):
        logger.warning("Pipeline does not have encode_prompt method; cannot enable prompt caching")
        return pipeline
    if hasattr(pipeline, "_prompt_cache_enabled"):
        return pipeline  # Already enabled

    original_encode_prompt = pipeline.encode_prompt
    pipeline_identity = pipeline.__class__.__name__

    def make_hashable(obj):
        if isinstance(obj, (list, tuple)):
            return tuple(make_hashable(i) for i in obj)
        if isinstance(obj, dict):
            return tuple(sorted((k, make_hashable(v)) for k, v in obj.items()))
        return obj

    @wraps(original_encode_prompt)
    def wrapped_encode_prompt(*args, **kwargs):
        try:
            # Create a cache key from the pipeline identity and a hashable
            # representation of all arguments. The identity ensures we don't
            # reuse Flux embeddings for a Wan model, etc.
            cache_key = (pipeline_identity, make_hashable(args), make_hashable(kwargs))
        except (TypeError, ValueError):
            # Fallback: if something isn't hashable, just compute normally.
            return original_encode_prompt(*args, **kwargs)

        if cache_key in GLOBAL_PROMPT_CACHE:
            # Move to end (most recently used).
            GLOBAL_PROMPT_CACHE.move_to_end(cache_key)
            logger.info("Using cached prompt embeddings")
            return GLOBAL_PROMPT_CACHE[cache_key]

        # Compute new results (e.g., prompt_embeds, negative_prompt_embeds).
        result = original_encode_prompt(*args, **kwargs)

        # Store in the global cache and evict the least recently used entry
        # once the cache exceeds its size limit.
        GLOBAL_PROMPT_CACHE[cache_key] = result
        if len(GLOBAL_PROMPT_CACHE) > MAX_PROMPT_CACHE_SIZE:
            GLOBAL_PROMPT_CACHE.popitem(last=False)
        return result

    # Monkey-patch the instance method.
    pipeline.encode_prompt = wrapped_encode_prompt
    pipeline._prompt_cache_enabled = True
    return pipeline
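A minimal usage sketch (the checkpoint id is just an example; any diffusers pipeline whose __call__ goes through self.encode_prompt should behave the same way):

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe = enable_prompt_caching(pipe)

# First call runs the text encoders; the second reuses the cached embeddings,
# because only the generator (seed) differs, not the encode_prompt arguments.
image_a = pipe("a photo of a cat", generator=torch.Generator("cuda").manual_seed(0)).images[0]
image_b = pipe("a photo of a cat", generator=torch.Generator("cuda").manual_seed(1)).images[0]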
-
Hi @innokean, is this issue resolved then, or do you need something else?
-
Is your feature request related to a problem? Please describe.
Text-to-image models require generating text embeddings as a first step. This step is relatively quick if VRAM is not a constraint, but it often is, so the text model(s) need to be loaded and unloaded, and then the transformer loaded, which adds significant overhead and slowdown. If a user wants to generate multiple images using the same prompt with different seeds, this process has to happen over and over (increasing the batch size is usually not an option).
Describe the solution you'd like.
We could have an option to cache embeddings either to RAM or to disk, where the hash of the text is the key and the embedding vector is the value. When disk is used, the values would have to live in a folder specific to the model.
When caching is enabled, the diffusers pipeline would check the cache first and, on a hit, use the stored embedding.
On a miss, the embedding is generated and saved to the cache.
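For illustration, a disk-backed version could look roughly like the sketch below. None of this exists in diffusers today; CACHE_ROOT, _cache_path and cached_encode are made-up names, and encode_fn stands for whatever function encodes a prompt for the model at hand (its return value is pipeline-specific).

import hashlib
from pathlib import Path

import torch

CACHE_ROOT = Path("~/.cache/prompt_embeds").expanduser()  # hypothetical location


def _cache_path(model_id: str, prompt: str) -> Path:
    # One sub-folder per model so embeddings from different text encoders never mix.
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    return CACHE_ROOT / model_id.replace("/", "--") / f"{key}.pt"


def cached_encode(encode_fn, model_id: str, prompt: str):
    """Return cached embeddings for (model_id, prompt), calling encode_fn(prompt) on a miss."""
    path = _cache_path(model_id, prompt)
    if path.exists():
        return torch.load(path, map_location="cpu")  # hit: text encoder never needs loading
    embeds = encode_fn(prompt)                       # miss: run the text encoder(s)
    path.parent.mkdir(parents=True, exist_ok=True)
    torch.save(embeds, path)
    return embeds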
Describe alternatives you've considered.
I have custom code which performs embedding generation as a separate step:
https://github.com/xhinker/sd_embed/blob/main/src/sd_embed/embedding_funcs.py
and then the embeddings are fed into the pipeline in the second step.
This is effectively the solution popularised by @sayakpaul, except that I cache the embeddings.
This method works up to Flux.1, but new models keep appearing, e.g. Qwen Image, for which there is currently no support.
Additional context.
Adding this should not break anything, since the cache can always be disabled (the default) and wiped.