Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Ednaordinary left a comment
Noticed a few things that would be helpful to change in this PR.
| prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k." | ||
|
|
||
| with torch.autocast("cuda", torch.bfloat16, cache_enabled=False): | ||
| frames = pipe(prompt, num_frames=84).frames[0] |
num_frames=84 should be num_frames=85 (14 * 6 + 1), as mentioned here.
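For concreteness, here is a minimal corrected sketch; the model id and the offload call are assumptions based on the surrounding Mochi docs, not part of the quoted diff:

```python
import torch
from diffusers import MochiPipeline

# Model id is an assumption taken from the Mochi docs this PR edits.
pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")
pipe.enable_model_cpu_offload()

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."

# Mochi frame counts follow 14 * k + 1, so 14 * 6 + 1 = 85, not 84.
with torch.autocast("cuda", torch.bfloat16, cache_enabled=False):
    frames = pipe(prompt, num_frames=85).frames[0]
```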
> @@ -25,6 +25,50 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.m
(One line above this) Only FlowMatchEulerDiscreteScheduler has invert_sigmas, so as I understand it, nothing else would work as of now.
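As a hedged illustration of that constraint (assuming invert_sigmas is a config flag on FlowMatchEulerDiscreteScheduler, as this comment suggests):

```python
from diffusers import FlowMatchEulerDiscreteScheduler, MochiPipeline

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")

# Only FlowMatchEulerDiscreteScheduler exposes `invert_sigmas` right now,
# so swapping in any other scheduler class would not work for Mochi yet.
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, invert_sigmas=True
)
```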
> pipe.enable_vae_tiling()
> prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
> frames = pipe(prompt, num_frames=84).frames[0]
Same thing: num_frames=85.
Comments from @Ednaordinary are already great, so let's resolve them. Maybe we could add a section on how to reproduce some of their videos generated with the original inference code and params? I think most people would be interested in that.
Additionally, it seems like we should suggest using a maximum sequence length of 256?
#9769 (comment)
Already the case:
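(The quoted snippet was lost in rendering; below is a sketch of what passing the value explicitly would look like, assuming MochiPipeline's call accepts max_sequence_length like other diffusers pipelines:)

```python
# 256 is the suggested cap on T5 prompt tokens from #9769.
frames = pipe(
    prompt,
    num_frames=85,
    max_sequence_length=256,
).frames[0]
```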
@DN6 is this ready to be merged? Cc: @a-r-r-o-w as well
sayakpaul left a comment
LGTM! Thanks, Dhruv. I know getting to this point hasn't been the easiest experience. Salute 🫡
> @@ -25,6 +25,50 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.m
| <Tip> | ||
| Decoding the latents in full precision is very memory intensive. You will need at least 70GB VRAM to generate the 163 frames | ||
| in this example. To reduce memory, either reduce the number of frames or run the decoding step in `torch.bfloat16` | ||
| </Tip> |
Even if we use enable_model_cpu_offload(), would we still need 70GB?
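For reference, a sketch combining the mitigations under discussion; whether offloading alone avoids the 70GB decode peak is exactly the open question here, so treat this combination as an assumption rather than a measured result:

```python
import torch

# Offload components to CPU between steps and tile the VAE decode.
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

# bfloat16 autocast also covers the decoding step, which is the
# full-precision memory peak the tip warns about.
with torch.autocast("cuda", torch.bfloat16, cache_enabled=False):
    frames = pipe(prompt, num_frames=163).frames[0]
```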
| <Tip> | |
| Decoding the latents in full precision is very memory intensive. You will need at least 70GB VRAM to generate the 163 frames | |
| in this example. To reduce memory, either reduce the number of frames or run the decoding step in `torch.bfloat16` | |
| </Tip> | |
| <Tip> | |
| Decoding the latents in full precision is very memory intensive. You will need at least 70GB VRAM to generate the 163 frames | |
| in this example. To reduce memory, either reduce the number of frames or run the decoding step in `torch.bfloat16`. | |
| </Tip> |
* update
* update
* update
* update
* update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
What does this PR do?
Update Mochi docs
Fixes # (issue)
Before submitting
Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.