Conversation
```python
if isinstance(self.scheduler, LmsDiscreteScheduler):
    latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs)["prev_sample"]
else:
    latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"]
```
This could be made cleaner by removing the branch (passing i instead of t everywhere), but for now I'm not touching the k-LMS timesteps, for ease of debugging.
```python
if isinstance(self.scheduler, LmsDiscreteScheduler):
    sigma = self.scheduler.sigmas[i]
    latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)
```
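The division above keeps the UNet input at roughly unit variance: in the k-diffusion parameterization the noisy latent is x0 + sigma * eps, whose variance is about sigma² + 1. A minimal NumPy sketch of this rescaling (the function name is hypothetical, not the PR's API):

```python
import numpy as np

def scale_model_input(latent, sigma):
    # The noisy latent x = x0 + sigma * eps has variance ~ sigma**2 + 1,
    # so dividing by sqrt(sigma**2 + 1) restores roughly unit variance
    # before the UNet forward pass.
    return latent / np.sqrt(sigma**2 + 1.0)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(10000)
sigma = 10.0
noisy = x0 + sigma * rng.standard_normal(10000)
scaled = scale_model_input(noisy, sigma)
print(scaled.std())  # close to 1.0
```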
We'll probably need a mechanism to specify custom coefficients for inputs in the future, since the Karras scheduler needs it too.
patrickvonplaten left a comment:
Just wondering about the i vs. sigma API, since in karras_ve we pass a sigma:
```python
# compute the previous noisy sample x_t -> x_t-1
if isinstance(self.scheduler, LmsDiscreteScheduler):
    latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs)["prev_sample"]
else:
    latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"]
```
Shouldn't we maybe pass the sigma here?
OK, we need it anyway; good to leave as is for me!
The i is needed inside step for other calculations too, and we can't recover i from the sigma, so we'll keep it like this until a fuller refactor.
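Until that refactor, callers have to branch on the scheduler type, as the diffs above show. A hypothetical wrapper (not part of the PR; the `uses_step_index` flag is an assumption for illustration) could hide the i-vs-t difference behind one call:

```python
# Hypothetical helper, not the PR's API: dispatch between step-index and
# timestep schedulers until their step() signatures are unified.
def scheduler_step(scheduler, noise_pred, i, t, latents, **extra_step_kwargs):
    # Schedulers like LmsDiscreteScheduler index into their sigma table by
    # step number i; most others take the timestep t directly.
    use_index = getattr(scheduler, "uses_step_index", False)  # assumed flag
    arg = i if use_index else t
    return scheduler.step(noise_pred, arg, latents, **extra_step_kwargs)["prev_sample"]
```

The denoising loop would then call `scheduler_step(...)` once instead of branching on `isinstance(self.scheduler, LmsDiscreteScheduler)` at every site.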
```python
prompt = ["a logo of Knicks and championship "]
height = 512  # default height of Stable Diffusion
num_inference_steps = 100  # number of denoising steps
guidance_scale = 7.5  # scale for classifier-free guidance
generator = torch.manual_seed(32)  # seed generator to create the initial latent noise
batch_size = 1
```
- test LMS with LDM
- test LMS with LDM
- Interchangeable sigma and timestep. Added dummy objects
- Debug
- cuda generator
- Fix derivatives
- Update tests
- Rename Lms->LMS
This PR adds the K-LMS sampler from k-diffusion by Katherine Crowson.
At the moment it only supports discrete beta-schedules (specifically the one for Stable Diffusion) but it will be extended to support continuous sigma-schedules in a follow-up PR.
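For intuition, the K-LMS update integrates a Lagrange basis polynomial over each sigma interval to weight the stored derivatives. The sketch below is a simplified stand-in, not the merged code (which follows k-diffusion and uses adaptive quadrature); the function name and the trapezoidal integration are assumptions for illustration:

```python
import numpy as np

def lms_coefficient(order, t, current_order, sigmas, n=501):
    # Integrate the Lagrange basis polynomial associated with derivative
    # t - current_order over [sigma_t, sigma_{t+1}], here with a simple
    # trapezoidal rule instead of adaptive quadrature.
    taus = np.linspace(sigmas[t], sigmas[t + 1], n)
    prod = np.ones_like(taus)
    for k in range(order):
        if k == current_order:
            continue
        prod *= (taus - sigmas[t - k]) / (sigmas[t - current_order] - sigmas[t - k])
    return np.trapz(prod, taus)

# With order=1 the coefficient reduces to sigma_{t+1} - sigma_t,
# i.e. a plain Euler step in sigma-space.
sigmas = [14.6, 9.7, 6.2, 3.8]
print(lms_coefficient(1, 0, 0, sigmas))  # ~ sigmas[1] - sigmas[0]
```

Higher orders blend several past derivatives of the denoised estimate, which is why the scheduler keeps a history of them between steps.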