Unsloth Studio Installation

Learn how to install Unsloth Studio on your local device.

Unsloth Studio works on Windows, Linux, WSL, and macOS. The installation process is the same on every platform, although the system requirements differ by device.


  • Mac: Like CPU, Chat + Data Recipes work for now. MLX training is coming very soon.

  • CPU: Unsloth Studio still works without a GPU, but only for Chat + Data Recipes.

  • Training: Works on NVIDIA GPUs (RTX 30, 40, and 50 series, Blackwell, DGX Spark/Station, etc.) and Intel GPUs.

  • Coming soon: Support for Apple MLX and AMD.

Install Instructions

Remember, the install instructions are the same on every device:

1

Install Unsloth

MacOS, Linux, WSL:

curl -fsSL https://unsloth.ai/install.sh | sh

Windows PowerShell:

irm https://unsloth.ai/install.ps1 | iex

WSL users: you will be prompted for your sudo password to install build dependencies (cmake, git, libcurl4-openssl-dev).

2

Launch Unsloth Studio

unsloth studio -H 0.0.0.0 -p 8888

Then open http://localhost:8888 in your browser.

3

Onboarding

On first launch you will be asked to create a password, which secures your account and lets you sign in again later. You’ll then see a brief onboarding wizard to choose a model, dataset, and basic settings. You can skip it at any time.

4

Start training and running

Start fine-tuning and building datasets immediately after launching. See our step-by-step guide to get started with Unsloth Studio:

Get Started

Update Unsloth Studio:

To update Unsloth Studio, run:

unsloth studio update 

If that does not work, reinstall using the commands below:

MacOS, Linux, WSL:

curl -fsSL https://unsloth.ai/install.sh | sh

Windows PowerShell:

irm https://unsloth.ai/install.ps1 | iex

System Requirements

windows Windows

Unsloth Studio works directly on Windows without WSL. To train models, make sure your system satisfies these requirements:

Requirements

  • Windows 10 or Windows 11 (64-bit)

  • NVIDIA GPU with drivers installed

  • App Installer (includes winget): here

  • Git: winget install --id Git.Git -e --source winget

  • Python: version 3.11 up to, but not including, 3.14

  • Work inside a Python environment such as uv, venv, or conda/mamba

apple MacOS

Unsloth Studio works on Mac devices for Chat with GGUF models and Data Recipes (Export coming very soon). MLX training is coming soon!

  • macOS 12 Monterey or newer (Intel or Apple Silicon)

  • Install Homebrew: here

  • Git: brew install git

  • cmake: brew install cmake

  • openssl: brew install openssl

  • Python: version 3.11 up to, but not including, 3.14

  • Work inside a Python environment such as uv, venv, or conda/mamba

linux Linux & WSL

  • Ubuntu 20.04+ or similar distro (64-bit)

  • NVIDIA GPU with drivers installed

  • CUDA toolkit (12.4+ recommended, 12.8+ for Blackwell)

  • Git: sudo apt install git

  • Python: version 3.11 up to, but not including, 3.14

  • Work inside a Python environment such as uv, venv, or conda/mamba
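All three platform sections above ask you to work inside a Python environment. A minimal way to do that with the stdlib venv module looks like the sketch below (the directory name .venv is just a common convention, not something Unsloth Studio requires; uv or conda/mamba work equally well):

```shell
# Create and activate an isolated environment before installing.
python3 -m venv .venv          # create the environment
. .venv/bin/activate           # activate it for the current shell
python -V                      # check the version falls in the supported 3.11-3.13 range
```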

docker Docker

  • Pull our latest Unsloth container image: docker pull unsloth/unsloth

  • Run the container via:

For more information, see here.

  • Access your Studio instance at http://localhost:8000 or at an external IP address: http://external_ip_address:8000/
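The page does not include the run command itself; a typical invocation might look like the sketch below. The --gpus flag, port mapping, and volume mount are assumptions, not the documented command (the page only tells us the image name and that the UI is served on port 8000):

```shell
# Hypothetical example only: flags are assumptions, not the documented command.
# Maps the Studio UI port to the host and exposes the GPU to the container.
docker run -d \
  --gpus all \
  -p 8000:8000 \
  -v "$HOME/.unsloth:/root/.unsloth" \
  unsloth/unsloth
```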

microchip CPU only

Unsloth Studio supports CPU-only devices for Chat with GGUF models and Data Recipes (Export coming very soon).

  • Same requirements as listed above for Linux and macOS, minus the NVIDIA GPU and drivers.

Developer Installation (Advanced)

Install from Main Repo

macOS, Linux, WSL developer installs:

Windows PowerShell developer installs:

Nightly Install

Nightly - MacOS, Linux, WSL:

Then to launch every time:

Nightly - Windows:

Run in Windows PowerShell:

Then to launch every time:

Uninstall

To uninstall Unsloth Studio, follow these 4 steps:

1. Remove the application

  • MacOS, WSL, Linux: rm -rf ~/.unsloth/studio/unsloth ~/.unsloth/studio/studio

  • Windows (PowerShell): Remove-Item -Recurse -Force "$HOME\.unsloth\studio\unsloth", "$HOME\.unsloth\studio\studio"

This removes the application but keeps your model checkpoints, exports, history, cache, and chats intact.

macOS:

Linux:

WSL / Windows (PowerShell):

3. Remove the CLI command

macOS, Linux, WSL:

Windows (PowerShell): The installer added the venv's Scripts directory to your User PATH. To remove it, open Settings → System → About → Advanced system settings → Environment Variables, find Path under User variables, and remove the entry pointing to .unsloth\studio\...\Scripts.

4. Remove everything (optional)

To also delete history, cache, chats, model checkpoints, and model exports, delete the entire Unsloth folder:

  • MacOS, WSL, Linux: rm -rf ~/.unsloth

  • Windows (PowerShell): Remove-Item -Recurse -Force "$HOME\.unsloth"

Note that downloaded HF model files are stored separately in the Hugging Face cache — none of the steps above will remove them. See Deleting model files below if you want to reclaim that disk space.


Deleting cached HF model files

You can delete old model files either from the bin icon in model search or by removing the relevant cached model folder from the default Hugging Face cache directory. By default, Hugging Face uses ~/.cache/huggingface/hub/ on macOS/Linux/WSL and C:\Users\<username>\.cache\huggingface\hub\ on Windows.

  • MacOS, Linux, WSL: ~/.cache/huggingface/hub/

  • Windows: %USERPROFILE%\.cache\huggingface\hub\

If HF_HUB_CACHE or HF_HOME is set, use that location instead. On Linux and WSL, XDG_CACHE_HOME can also change the default cache root.
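The lookup order described above can be sketched as a small shell helper. This mirrors, but is not, huggingface_hub's own resolution logic; the precedence (HF_HUB_CACHE, then HF_HOME/hub, then the XDG cache root, then ~/.cache) follows the env vars named in this section:

```shell
# Print the effective Hugging Face hub cache directory.
hf_hub_cache() {
  if [ -n "${HF_HUB_CACHE:-}" ]; then
    printf '%s\n' "$HF_HUB_CACHE"       # explicit hub cache wins
  elif [ -n "${HF_HOME:-}" ]; then
    printf '%s/hub\n' "$HF_HOME"        # hub cache lives under HF_HOME
  else
    # fall back to the XDG cache root, then to ~/.cache
    printf '%s/huggingface/hub\n' "${XDG_CACHE_HOME:-$HOME/.cache}"
  fi
}
hf_hub_cache
```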

Using old / existing GGUF models

Apr 1 update: You can now select an existing folder for Unsloth to detect from.

Mar 27 update: Unsloth Studio now automatically detects older / pre-existing models downloaded from Hugging Face, LM Studio etc.

Manual instructions: Unsloth Studio detects models downloaded to your Hugging Face Hub cache (C:\Users\{your_username}\.cache\huggingface\hub). If you have GGUF models downloaded through LM Studio, note that these are stored in C:\Users\{your_username}\.cache\lm-studio\models or C:\Users\{your_username}\lm-studio\models. If they are not visible, you may need to move or copy those .gguf files into your Hugging Face Hub cache directory (or another path accessible to llama.cpp) for Unsloth Studio to load them.
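The manual copy described above might look like the following on macOS/Linux/WSL. The paths are the Unix equivalents of the Windows ones on this page; the LM Studio location is an assumption and may differ per install:

```shell
# Copy GGUF files from the LM Studio download directory into the
# Hugging Face Hub cache so Unsloth Studio / llama.cpp can find them.
src="$HOME/.cache/lm-studio/models"
dst="$HOME/.cache/huggingface/hub"
mkdir -p "$dst"
if [ -d "$src" ]; then
  # copy every .gguf file found anywhere under the LM Studio folder
  find "$src" -name '*.gguf' -exec cp {} "$dst" \;
fi
```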

After fine-tuning a model or adapter in Studio, you can export it to GGUF and run local inference with llama.cpp directly in Studio Chat. Unsloth Studio is powered by llama.cpp and Hugging Face.

Google Colab notebook

We’ve created a free Google Colab notebook so you can explore all of Unsloth’s features on Colab’s T4 GPUs. You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models. Just click 'Run all' and the UI should pop up after installation.

Once installation is complete, scroll to Start Unsloth Studio and click Open Unsloth Studio in the white box shown on the left:

Scroll further down to see the actual UI.


Troubleshooting

  • Python version error: run sudo apt install python3.12 python3.12-venv (any Python version from 3.11 up to, but not including, 3.14 works)

  • nvidia-smi not found: install NVIDIA drivers from https://www.nvidia.com/Download/index.aspx

  • nvcc not found (CUDA): run sudo apt install nvidia-cuda-toolkit, or add /usr/local/cuda/bin to PATH

  • llama-server build failed: non-fatal; Studio still works, but GGUF inference won't be available. Install cmake and re-run setup to fix.

  • cmake not found: sudo apt install cmake

  • git not found: sudo apt install git

  • Build failed: delete ~/.unsloth/llama.cpp and re-run setup
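Several of the fixes above boil down to a tool missing from PATH. A small preflight helper can report which ones are present before you re-run setup; the tool names come from this page, and on CPU-only or Mac setups the NVIDIA ones are expected to be missing:

```shell
# Print one "ok"/"missing" line per tool the troubleshooting entries cover.
check_tools() {
  for tool in git cmake nvcc nvidia-smi; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
    fi
  done
}
check_tools
```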
