This repository contains a collection of Jupyter Notebooks demonstrating how to use TimmWrapper from the 🤗 transformers library to integrate timm models effortlessly. The notebooks cover a wide range of use cases, including pipelines, quantization, fine-tuning, LoRA, and model compilation.
| Notebook | Description |
|---|---|
| `01_pipelines.ipynb` | Use the pipeline API with any timm model. |
| `02_auto_class.ipynb` | Explore Auto Classes to load timm models easily. |
| `03_quantization.ipynb` | Apply 8-bit quantization for efficient inference. |
| `04_sft.ipynb` | Perform supervised fine-tuning with the Trainer API. |
| `05_sft_lora.ipynb` | Fine-tune using LoRA adapters for efficient training. |
| `06_use_lora.ipynb` | Load and use LoRA fine-tuned models for inference. |
| `07_compile.ipynb` | Speed up inference using `torch.compile`. |
- Clone the repository:

  ```bash
  git clone https://github.com/your-username/timmwrapper-examples.git
  cd timmwrapper-examples
  ```

- Install the required packages:

  ```bash
  pip install -U transformers timm gradio bitsandbytes
  ```

- Run the notebooks:

  ```bash
  jupyter notebook
  ```
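Once everything is installed, a quick smoke test of the pipeline API looks like this (a minimal sketch; the checkpoint and image URL are just examples, and any timm model hosted on the Hugging Face Hub should work):

```python
from transformers import pipeline

# transformers dispatches timm checkpoints to TimmWrapper under the hood.
classifier = pipeline(
    "image-classification",
    model="timm/vit_base_patch16_224.augreg2_in21k_ft_in1k",
)

# The pipeline accepts local paths, PIL images, or URLs.
preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
print(preds)
```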
- ✅ Using `TimmWrapper` with the pipeline API for image classification.
- ✅ Loading models with Auto Classes.
- ✅ Applying 8-bit quantization with `bitsandbytes`.
- ✅ Fine-tuning using the Trainer API.
- ✅ Efficient training with LoRA adapters.
- ✅ Speeding up inference with `torch.compile`.

A few of these are sketched below.
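For example, Auto Classes resolve a timm checkpoint to the matching `TimmWrapper` class, and 8-bit loading is a one-argument change (a sketch; the checkpoint is an example, and 8-bit loading assumes a CUDA GPU with `bitsandbytes` installed):

```python
from transformers import AutoModelForImageClassification, BitsAndBytesConfig

# AutoModelForImageClassification picks the TimmWrapper classification
# class for timm checkpoints automatically.
model = AutoModelForImageClassification.from_pretrained(
    "timm/vit_base_patch16_224.augreg2_in21k_ft_in1k",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
print(f"{model.get_memory_footprint() / 1e6:.0f} MB")  # rough memory-footprint check
```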
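LoRA training with 🤗 peft follows the usual recipe. This is a sketch rather than the notebooks' exact code; the `target_modules` and `modules_to_save` names are assumptions about timm's ViT layout, and other backbones use different module names:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageClassification

# Replace the pretrained head with a fresh one for Food101's 101 classes.
model = AutoModelForImageClassification.from_pretrained(
    "timm/vit_base_patch16_224.augreg2_in21k_ft_in1k",
    num_labels=101,
    ignore_mismatched_sizes=True,
)

# "qkv" targets the fused attention projection in timm ViT blocks (an
# assumption about this architecture); "head" keeps the new classifier
# fully trainable alongside the adapters.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["qkv"],
    modules_to_save=["head"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The wrapped model can then be passed to the Trainer API like any other transformers model.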
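And `torch.compile` wraps the model directly; the first forward pass pays the compilation cost and subsequent calls are faster (a sketch assuming a CUDA device):

```python
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "timm/vit_base_patch16_224.augreg2_in21k_ft_in1k"
).eval().to("cuda")
model = torch.compile(model)

with torch.inference_mode():
    pixel_values = torch.randn(1, 3, 224, 224, device="cuda")  # dummy batch
    logits = model(pixel_values=pixel_values).logits
print(logits.shape)
```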
| Model | Description |
|---|---|
| `vit_base_patch16_224_food101` | Fine-tuned on Food101. |
| `vit_base_patch16_224_lora_food101` | LoRA fine-tuned version of the same model. |
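To run inference with the LoRA checkpoint, load the base model and attach the adapter with peft (a sketch; the repository id below is a placeholder for wherever you push the fine-tuned weights):

```python
from peft import PeftModel
from transformers import AutoModelForImageClassification

# The adapter repo supplies the fine-tuned head, so mismatched base head
# sizes can be ignored here.
base = AutoModelForImageClassification.from_pretrained(
    "timm/vit_base_patch16_224.augreg2_in21k_ft_in1k",
    num_labels=101,
    ignore_mismatched_sizes=True,
)
model = PeftModel.from_pretrained(base, "your-username/vit_base_patch16_224_lora_food101")
model = model.merge_and_unload()  # optional: fold the adapter into the base weights
```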
Feel free to open issues or submit pull requests if you have suggestions or improvements!
Note: This README was created with the help of ChatGPT 4.