Labels: good first issue · modelling (related to CoreML/Transformers)
Description
I converted the models to float32 using this script: https://gist.github.com/pcuenca/23cd08443460bc90854e2a6f0f575084, but ran into precision problems when targeting float16. It would be interesting to see what the performance looks like in float16, but first we need to determine which layers/ops have to be kept in float32. Anyone interested, please let us know and we can work on it or test together :)
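One way to experiment with this is coremltools' mixed-precision support: `ct.transform.FP16ComputePrecision` accepts an `op_selector` callback that decides, per op, whether it gets cast to float16 or stays in float32. The sketch below is not the gist's script, just a starting point, and the model id, input shape, and the op types listed in `KEEP_FP32` (softmax, layer norm) are assumptions; finding the right set of ops is exactly what this issue is about.

```python
import numpy as np
import torch
import coremltools as ct
from transformers import AutoModelForCausalLM

# Assumed checkpoint and sequence length, purely for illustration.
model = AutoModelForCausalLM.from_pretrained("gpt2", torchscript=True).eval()
example_input = torch.randint(0, model.config.vocab_size, (1, 128))
traced = torch.jit.trace(model, example_input)

# Ops we guess are numerically sensitive and should stay in float32.
# The real list still needs to be determined by testing.
KEEP_FP32 = {"softmax", "layer_norm"}

def op_selector(op):
    # Return True to cast this op to float16, False to keep it in float32.
    return op.op_type not in KEEP_FP32

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    compute_precision=ct.transform.FP16ComputePrecision(op_selector=op_selector),
    inputs=[ct.TensorType(name="input_ids", shape=example_input.shape, dtype=np.int32)],
)
mlmodel.save("gpt2-fp16-mixed.mlpackage")
```

Comparing the outputs of this mixed-precision model against the float32 conversion (e.g. max absolute difference of the logits on a few prompts) would tell us whether keeping those ops in float32 is enough, or whether more ops need to be excluded.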