[lora] adapt new LoRA config injection method (#11999)
setup.py (review comment on an outdated diff):

```python
    "numpy",
    "parameterized",
    "peft>=0.15.0",
    # "peft>=0.16.1",
```
To be changed when peft has the release.
Now that PEFT 0.17.0 is out, we should be good to review this PR. Gentle ping @BenjaminBossan |
BenjaminBossan left a comment:
Looks good from my side, thanks.
@DN6 could you give this a check please? This solves some existing issues around the complicated logic of deriving some of the LoRA config values.
DN6 left a comment:
LGTM 👍🏽 Just need tests to pass.
Failing test is unrelated.
Linked issue: Still not 100% layers loaded
Linked issue: inference with lora suddenly significantly slower than with diffusers==0.33.1 and peft==0.14.0
What does this PR do?
Fixes #11874
Relies on huggingface/peft#2637
Supersedes #11911
TODO: Run some integration tests before merging.
Edit: Have run the integration tests for SDXL and Flux and they work.
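For context on what a "LoRA config" parameterizes: the adapter applies the low-rank update W_eff = W + (alpha / r) * B @ A, where r and alpha come from the config. A plain-Python illustration of that merge rule (shapes and names are illustrative only; the real injection lives in peft/diffusers and operates on torch modules):

```python
# Illustrative sketch of the LoRA weight merge, using nested lists
# instead of torch tensors. Hypothetical helper names, not library API.
def matmul(X, Y):
    # Naive matrix multiply on nested lists: (m x k) @ (k x n) -> (m x n).
    return [[sum(x * Y[k][j] for k, x in enumerate(row))
             for j in range(len(Y[0]))] for row in X]

def merged_weight(W, A, B, alpha, r):
    # W: (out x in), A: (r x in), B: (out x r); scale = alpha / r.
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
A = [[1.0, 2.0]]               # rank-1 down-projection
B = [[3.0], [4.0]]             # rank-1 up-projection
print(merged_weight(W, A, B, alpha=2.0, r=1))
# [[7.0, 12.0], [8.0, 17.0]]
```

The point of the new injection path is that these config values (rank, alpha, target modules) are passed through PEFT's machinery directly rather than being re-derived inside diffusers.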