Setting up GPT-OSS models using LM Studio CLI | Alpha | PandaiTech

An alternative method for installing GPT-OSS models with LM Studio and the command line (terminal), for users who want an approach other than Ollama.

Key Insights

Mandatory Requirement Before Using CLI

You MUST open the LM Studio application (GUI) at least once before running any 'lms' commands in the terminal; otherwise the shell will not recognize the command.
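As a quick sanity check, you can verify from the terminal whether the 'lms' command has been registered on your PATH (a minimal sketch; the exact install location varies by operating system):

```shell
# Check whether the 'lms' CLI is available. LM Studio registers it
# the first time the GUI application is opened.
if command -v lms >/dev/null 2>&1; then
  echo "lms found at: $(command -v lms)"
else
  echo "lms not found - open the LM Studio app once, then retry"
fi
```

If the second message appears, launch the LM Studio app, close it, and open a fresh terminal session before trying again.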

Advantages of LM Studio vs Ollama

While Ollama is simpler to use, LM Studio offers finer technical control and a graphical interface (GUI) that makes it easier to browse and manage your downloaded models.
Step by Step

How to Set Up GPT-OSS Models via LM Studio CLI

  1. Download and install the LM Studio application on your computer.
  2. Open the LM Studio app at least once. This step is crucial: it registers the 'lms' command so your terminal can find it.
  3. Locate the 'LM Studio CLI' command displayed within the application.
  4. Open Terminal (for Mac/Linux users) or Command Prompt (for Windows users).
  5. Copy the command and paste it into the terminal.
  6. Press Enter; the GPT-OSS model download starts automatically in the terminal.
  7. Once the download is complete, reopen the LM Studio application.
  8. Click the 'Discover' tab (magnifying glass icon) in the left navigation bar.
  9. Find the newly downloaded GPT-OSS model; it usually appears at the very top.
  10. Click on the model and select 'AI Chat' to start using the model locally.
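The download-and-verify portion of the steps above can be sketched as two terminal commands. The model identifier below is an assumption for illustration; copy the exact command shown in LM Studio's 'LM Studio CLI' panel rather than typing it from memory:

```shell
# Download a GPT-OSS model through the LM Studio CLI.
# "openai/gpt-oss-20b" is a placeholder identifier; use the exact
# command displayed inside the LM Studio application.
lms get openai/gpt-oss-20b

# List downloaded models to confirm the new model appears.
lms ls
```

After 'lms ls' shows the model, you can switch back to the GUI's Discover tab and start a chat as described in steps 7-10.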
