Unsloth Studio Installation
Learn how to install Unsloth Studio on your local device.
Unsloth Studio works on Windows, Linux, WSL, and MacOS. The installation process is the same on every device, although the system requirements differ by platform.
Mac: Works like CPU for now, i.e. Chat + Data Recipes only. MLX training is coming very soon.
CPU: Unsloth still works without a GPU, but only for Chat + Data Recipes.
Training: Works on NVIDIA GPUs (RTX 30, 40, 50 series, Blackwell, DGX Spark/Station, etc.) and Intel GPUs.
Coming soon: Support for Apple MLX and AMD.
Install Instructions
Remember, the install instructions are the same across every device:
Install Unsloth
MacOS, Linux, WSL:
curl -fsSL https://unsloth.ai/install.sh | sh
Windows PowerShell:
irm https://unsloth.ai/install.ps1 | iex
The first install should now be 6x faster and 50% smaller thanks to precompiled llama.cpp binaries.
WSL users: you will be prompted for your sudo password to install build dependencies (cmake, git, libcurl4-openssl-dev).
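If you'd rather install those build dependencies up front and skip the sudo prompt, a small POSIX-shell helper can report which tools are still missing. This is a sketch; the dependency list comes from the note above, and the helper name is ours:

```shell
# Sketch: report which of the tools the WSL installer depends on are not yet
# on PATH. Checks commands only; libcurl4-openssl-dev provides headers rather
# than a command, so install that package separately.
missing_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  echo "${missing# }"
}

# Example: pre-install whatever is reported missing, plus the curl headers
# sudo apt-get install -y $(missing_tools cmake git) libcurl4-openssl-dev
```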
Start training and running
Start fine-tuning and building datasets immediately after launching. See our step-by-step guide to get started with Unsloth Studio:
Get Started
Update Unsloth Studio
To update Unsloth Studio use:
unsloth studio update
If that does not work, you can use the commands below:
MacOS, Linux, WSL:
curl -fsSL https://unsloth.ai/install.sh | sh
Windows PowerShell:
irm https://unsloth.ai/install.ps1 | iex
System Requirements
Windows
Unsloth Studio works directly on Windows without WSL. To train models, make sure your system satisfies these requirements:
Requirements
Windows 10 or Windows 11 (64-bit)
NVIDIA GPU with drivers installed
App Installer (includes winget): here
Git: winget install --id Git.Git -e --source winget
Python: version 3.11 up to, but not including, 3.14
Work inside a Python environment such as uv, venv, or conda/mamba
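The Python constraint above (3.11 up to, but not including, 3.14) can be checked before you create an environment. A minimal POSIX-shell sketch; the version range is from the requirement above, and the helper name is ours:

```shell
# Check whether a "major.minor[.patch]" Python version satisfies the
# documented range: 3.11 <= version < 3.14.
py_version_ok() {
  major=${1%%.*}
  minor=${1#*.}
  minor=${minor%%.*}
  [ "$major" -eq 3 ] && [ "$minor" -ge 11 ] && [ "$minor" -lt 14 ]
}

# Usage: query the active interpreter and test it
# ver=$(python3 -c 'import sys; print(f"{sys.version_info[0]}.{sys.version_info[1]}")')
# py_version_ok "$ver" && echo "Python $ver is supported"
```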
MacOS
Unsloth Studio works on Mac devices for Chat with GGUF models and Data Recipes (Export coming very soon). MLX training is coming soon!
macOS 12 Monterey or newer (Intel or Apple Silicon)
Install Homebrew: here
Git: brew install git
cmake: brew install cmake
openssl: brew install openssl
Python: version 3.11 up to, but not including, 3.14
Work inside a Python environment such as uv, venv, or conda/mamba
Linux & WSL
Ubuntu 20.04+ or similar distro (64-bit)
NVIDIA GPU with drivers installed
CUDA toolkit (12.4+ recommended, 12.8+ for Blackwell)
Git: sudo apt install git
Python: version 3.11 up to, but not including, 3.14
Work inside a Python environment such as uv, venv, or conda/mamba
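The CUDA toolkit minimum above can also be verified in shell. A sketch, assuming nvcc is on PATH; the version thresholds are from the requirement above and the helper name is ours:

```shell
# Compare a detected "major.minor" CUDA version against a required minimum
# (12.4 recommended; 12.8 for Blackwell GPUs, per the requirements above).
cuda_meets_min() {
  det_major=${1%%.*}; det_minor=${1#*.}
  req_major=${2%%.*}; req_minor=${2#*.}
  if [ "$det_major" -ne "$req_major" ]; then
    [ "$det_major" -gt "$req_major" ]
  else
    [ "$det_minor" -ge "$req_minor" ]
  fi
}

# Usage (assumes nvcc is installed):
# ver=$(nvcc --version | sed -n 's/.*release \([0-9]*\.[0-9]*\).*/\1/p')
# cuda_meets_min "$ver" 12.4 && echo "CUDA $ver OK"
```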
Docker
Our Docker image now works for Studio! We're working on Mac compatibility.
Pull our latest Unsloth container image:
docker pull unsloth/unsloth
Run the container via:
For more information, see here.
Access your Studio instance at http://localhost:8000, or via your machine's external IP address at http://external_ip_address:8000/.
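The exact run flags aren't spelled out here, so treat the following as a hedged sketch, not the official invocation: it publishes port 8000 (matching the URLs above) and passes the GPU through; adjust for your setup.

```shell
# Assumptions: --gpus all requires the NVIDIA Container Toolkit; the
# container name "unsloth-studio" is our choice; only the image name and
# port 8000 come from this page.
IMAGE=unsloth/unsloth
PORT=8000
DOCKER_CMD="docker run -d --gpus all -p ${PORT}:8000 --name unsloth-studio ${IMAGE}"
echo "$DOCKER_CMD"
# Run it with: eval "$DOCKER_CMD"
```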
CPU only
Unsloth Studio supports CPU-only devices for Chat with GGUF models and Data Recipes (Export coming very soon).
The requirements are the same as those listed above for Linux and MacOS, minus the NVIDIA GPU drivers.
Developer Installation (Advanced)
Install from Main Repo
macOS, Linux, WSL developer installs:
Windows PowerShell developer installs:
Nightly Install
Nightly - MacOS, Linux, WSL:
Then to launch every time:
Nightly - Windows:
Run in Windows Powershell:
Then to launch every time:
Uninstall
To uninstall Unsloth Studio, follow these 4 steps:
1. Remove the application
MacOS, WSL, Linux:
rm -rf ~/.unsloth/studio/unsloth ~/.unsloth/studio/studio
Windows (PowerShell):
Remove-Item -Recurse -Force "$HOME\.unsloth\studio\unsloth", "$HOME\.unsloth\studio\studio"
This removes the application but keeps your model checkpoints, exports, history, cache, and chats intact.
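The effect of this step can be sketched as a small helper. The two paths are the ones named above; the optional base-directory argument is our addition so the helper can be exercised against a scratch folder instead of the real ~/.unsloth:

```shell
# Remove only the application subfolders, leaving checkpoints, exports,
# history, cache, and chats elsewhere under the base directory untouched.
remove_app_only() {
  base="${1:-$HOME/.unsloth}"
  rm -rf "$base/studio/unsloth" "$base/studio/studio"
}
```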
2. Remove shortcuts and symlinks
macOS:
Linux:
WSL / Windows (PowerShell):
3. Remove the CLI command
macOS, Linux, WSL:
Windows (PowerShell): The installer added the venv's Scripts directory to your User PATH. To remove it, open Settings → System → About → Advanced system settings → Environment Variables, find Path under User variables, and remove the entry pointing to .unsloth\studio\...\Scripts.
4. Remove everything (optional)
To also delete history, cache, chats, model checkpoints, and model exports, delete the entire Unsloth folder:
MacOS, WSL, Linux:
rm -rf ~/.unsloth
Windows (PowerShell):
Remove-Item -Recurse -Force "$HOME\.unsloth"
Note that downloaded HF model files are stored separately in the Hugging Face cache — none of the steps above will remove them. See Deleting model files below if you want to reclaim that disk space.
Note: Using the rm -rf commands will delete everything, including your history, cache, chats etc.
Deleting cached HF model files
You can delete old model files either from the bin icon in model search or by removing the relevant cached model folder from the default Hugging Face cache directory. By default, Hugging Face uses ~/.cache/huggingface/hub/ on macOS/Linux/WSL and C:\Users\<username>\.cache\huggingface\hub\ on Windows.
MacOS, Linux, WSL:
~/.cache/huggingface/hub/
Windows:
%USERPROFILE%\.cache\huggingface\hub\
If HF_HUB_CACHE or HF_HOME is set, use that location instead. On Linux and WSL, XDG_CACHE_HOME can also change the default cache root.
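That precedence can be sketched as a shell helper. This is a simplified model of how huggingface_hub resolves its cache root, not the library's actual code:

```shell
# Resolve the effective HF hub cache directory following the precedence
# described above: HF_HUB_CACHE wins, then HF_HOME/hub, then
# XDG_CACHE_HOME/huggingface/hub, then the ~/.cache/huggingface/hub default.
hf_hub_cache_dir() {
  if [ -n "${HF_HUB_CACHE:-}" ]; then
    echo "$HF_HUB_CACHE"
  elif [ -n "${HF_HOME:-}" ]; then
    echo "$HF_HOME/hub"
  elif [ -n "${XDG_CACHE_HOME:-}" ]; then
    echo "$XDG_CACHE_HOME/huggingface/hub"
  else
    echo "$HOME/.cache/huggingface/hub"
  fi
}

# Example: ls "$(hf_hub_cache_dir)" to list cached model folders
```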
Using old / existing GGUF models
Apr 1 update: You can now select an existing folder for Unsloth to detect from.
Mar 27 update: Unsloth Studio now automatically detects older / pre-existing models downloaded from Hugging Face, LM Studio etc.

Manual instructions: Unsloth Studio detects models downloaded to your Hugging Face Hub cache (C:\Users\{your_username}\.cache\huggingface\hub). If you have GGUF models downloaded through LM Studio, note that these are stored in C:\Users\{your_username}\.cache\lm-studio\models or C:\Users\{your_username}\lm-studio\models. If they are not visible, you will need to move or copy those .gguf files into your Hugging Face Hub cache directory (or another path accessible to llama.cpp) for Unsloth Studio to load them.
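Before moving files around, it can help to see which GGUF files a directory actually contains. A one-function sketch; the helper name is ours:

```shell
# List every .gguf file under a directory tree (e.g. an LM Studio models
# folder), so you can decide which ones to copy into the HF hub cache.
find_ggufs() {
  find "$1" -type f -name '*.gguf' 2>/dev/null
}

# Example: find_ggufs ~/.cache/lm-studio/models
```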
After fine-tuning a model or adapter in Studio, you can export it to GGUF and run local inference with llama.cpp directly in Studio Chat. Unsloth Studio is powered by llama.cpp and Hugging Face.
Google Colab notebook
We’ve created a free Google Colab notebook so you can explore all of Unsloth’s features on Colab’s T4 GPUs. You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models. Just click 'Run all' and the UI should appear after installation.
Once installation is complete, scroll to Start Unsloth Studio and click Open Unsloth Studio in the white box shown on the left:
Scroll further down, to see the actual UI.

Sometimes the Studio link may return an error. This usually happens because cookies are disabled, or you're using an adblocker or Mozilla Firefox. You can still access the UI by scrolling below the button.
Google Colab also expects you to stay on the Colab page; if it detects inactivity, it may shut down the GPU session.
Troubleshooting
Python version error
Unsloth Studio requires Python 3.11 up to, but not including, 3.14. On Ubuntu: sudo apt install python3.12 python3.12-venv
nvidia-smi not found
Install NVIDIA drivers from https://www.nvidia.com/Download/index.aspx
nvcc not found (CUDA)
sudo apt install nvidia-cuda-toolkit or add /usr/local/cuda/bin to PATH
llama-server build failed
Non-fatal: Studio still works, but GGUF inference won't be available. Install cmake and re-run setup to fix.
cmake not found
sudo apt install cmake
git not found
sudo apt install git
Build failed
Delete ~/.unsloth/llama.cpp and re-run setup
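That fix can be sketched as a helper; the optional base-directory argument is our addition so it can be tried against a scratch folder instead of the real ~/.unsloth:

```shell
# Clear the llama.cpp build directory so the next setup run rebuilds
# from scratch (the documented path is ~/.unsloth/llama.cpp).
clear_llama_build() {
  rm -rf "${1:-$HOME/.unsloth}/llama.cpp"
}
```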