lms ships with LM Studio, so you don't need any additional installation steps if you have LM Studio installed.
Just open a terminal window and run lms:

```shell
lms --help
```
lms is MIT Licensed and is developed in this repository on GitHub: https://github.com/lmstudio-ai/lms
| Command | Syntax | Docs |
|---|---|---|
| Chat in the terminal | `lms chat` | Guide |
| Download models | `lms get` | Guide |
| List your models | `lms ls` | Guide |
| See models loaded into memory | `lms ps` | Guide |
| Control the server | `lms server start` | Guide |
| Manage the inference runtime | `lms runtime` | Guide |
| Manage the headless daemon | `lms daemon` | Guide |
| Manage LM Link | `lms link` | Guide |
👉 You need to run LM Studio at least once before you can use lms.
Open a terminal window and run lms.
Use lms to automate and debug your workflows. Start and stop the local server:

```shell
lms server start
lms server stop
```
Learn more about lms server.
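As a sketch of how the server commands fit into a script: LM Studio's local server exposes an OpenAI-compatible API, by default on port 1234 (adjust if you've configured a different port):

```shell
# Start the local API server.
lms server start

# The server speaks the OpenAI-compatible API; list available models.
curl http://localhost:1234/v1/models

# Stop it again when you're done.
lms server stop
```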
```shell
lms ls
```
Learn more about lms ls.
This will reflect the current LM Studio models directory, which you can set in the 📂 My Models tab in the app.
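When scripting, machine-readable output is easier to consume than the human-readable listing. A sketch (the `--json` flag is an assumption here; check `lms ls --help` on your install to confirm it is available):

```shell
# Emit the downloaded-model list as JSON for use in scripts.
# --json is assumed; verify with `lms ls --help`.
lms ls --json
```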
```shell
lms ps
```
Learn more about lms ps.
```shell
lms load [--gpu=max|auto|0.0-1.0] [--context-length=1-N]
```

`--gpu=1.0` means "attempt to offload 100% of the computation to the GPU".
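For example, to load a model with roughly half of the computation offloaded to the GPU and a 4096-token context window (the specific values here are illustrative):

```shell
# Offload ~50% of the work to the GPU and cap the context at 4096 tokens.
lms load openai/gpt-oss-20b --gpu=0.5 --context-length=4096
```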
To assign a custom identifier to a model when loading it, use the `--identifier` flag:

```shell
lms load openai/gpt-oss-20b --identifier="my-model-name"
```

This is useful if you want to keep the model identifier consistent.
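The identifier is the name you reference in API requests against the local server. A sketch, assuming the server is running on the default port 1234:

```shell
# Call the OpenAI-compatible chat endpoint, addressing the model
# by the identifier set at load time.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-model-name",
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'
```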
```shell
lms unload [--all]
```
Learn more about lms load and unload.
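Putting load and unload together, a minimal automation sketch (the model name and identifier are taken from the example above; the middle step is a placeholder for your own requests):

```shell
# Load the model under a stable identifier.
lms load openai/gpt-oss-20b --identifier="my-model-name"

# ... run your requests against the local server here ...

# Free the memory when finished.
lms unload my-model-name

# Or unload every loaded model at once:
lms unload --all
```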
This page's source is available on GitHub