
Installing and Configuring LiteLLM Proxy with Docker

A technical walkthrough on installing LiteLLM via Docker, editing environment variables, and setting up 'Virtual Keys' to run Claude, Grok, and GPT together while staying within budget.

Key Insights

API Key Security

LITELLM_SALT_KEY is crucial because it encrypts the provider API keys (such as Claude/GPT) that you store in the LiteLLM database. Do not change this key once the server is in use: keys already stored were encrypted with the old value and could no longer be decrypted.
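For reference, here is a minimal .env sketch. Both values are placeholders you must replace with your own randomly generated strings; the 'sk-' prefix follows the convention used in LiteLLM's documentation.

    # .env (sketch) — replace both values with long random strings
    LITELLM_MASTER_KEY="sk-REPLACE_WITH_RANDOM_STRING"   # admin login and API authentication
    LITELLM_SALT_KEY="sk-REPLACE_WITH_RANDOM_STRING"     # encrypts stored provider keys; never change once in use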

Advantages of Virtual Keys

Use Virtual Keys to segment access for different users. You can restrict specific users from using expensive models and set a 'Monthly Budget' to prevent bill spikes.
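Virtual Keys can also be created programmatically instead of through the dashboard. Here is a hedged sketch against LiteLLM's /key/generate endpoint (field names follow the current LiteLLM docs; the alias, model name, and budget are illustrative, and $MASTER_KEY is assumed to hold your LITELLM_MASTER_KEY):

    # Create a budget-capped virtual key via the proxy API
    curl -s http://localhost:4000/key/generate \
      -H "Authorization: Bearer $MASTER_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "key_alias": "kids-access",
            "models": ["claude-3-7-sonnet"],
            "max_budget": 20,
            "budget_duration": "30d"
          }'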

Localhost Networking

If Open WebUI and LiteLLM run as containers on the same Docker network, use the LiteLLM container or service name as the Base URL (e.g., 'http://litellm:4000'); from inside a container, 'localhost' refers to that container itself. 'http://localhost:4000' works when Open WebUI runs directly on the host. Either way, traffic stays on the local machine for faster communication.
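To check which Docker network each container sits on, a quick sketch (the container names 'litellm' and 'open-webui' are assumptions; substitute your own):

    docker network ls
    # Print the networks each container is attached to
    docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' litellm
    docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' open-webui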
Step by Step

Step 1: Server Preparation and LiteLLM Installation

  1. Access your server terminal or VPS (e.g., via the Browser Terminal in your VPS portal).
  2. Clone the LiteLLM repository from GitHub (https://github.com/BerriAI/litellm) onto your server.
  3. Type 'ls' to see the folder list, then type 'cd litellm' to enter the directory.
  4. Type 'nano .env' to open the text editor and edit the hidden configuration file.
  5. Add the line 'LITELLM_MASTER_KEY="sk-RANDOMLY_GENERATED_KEY"' (use a password generator for a secure key; LiteLLM's docs use keys prefixed with 'sk-').
  6. Add the line 'LITELLM_SALT_KEY="sk-RANDOMLY_GENERATED_KEY"' to encrypt your API credentials (use a different random value from the master key).
  7. Press 'Ctrl + X', then 'Y', and 'Enter' to save the changes.
  8. Run the command 'docker compose up -d' (or 'docker-compose up -d' on older Docker installs) to build and run the LiteLLM container in the background.
  9. Type 'docker ps' to ensure the container status is 'healthy' and running. (The whole sequence is consolidated in the sketch after this list.)
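For reference, Step 1 condensed into a single shell sketch (the key values are placeholders to replace with your own random strings):

    git clone https://github.com/BerriAI/litellm.git
    cd litellm
    # Append the two required keys to the hidden config file
    cat >> .env <<'EOF'
    LITELLM_MASTER_KEY="sk-REPLACE_WITH_RANDOM_STRING"
    LITELLM_SALT_KEY="sk-REPLACE_WITH_RANDOM_STRING"
    EOF
    docker compose up -d   # "docker-compose up -d" on older installs
    docker ps              # wait for the litellm container to report (healthy)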

Step 2: Configuring AI Models in the Admin Panel

  1. Obtain an API Key from your preferred AI provider (OpenAI, Anthropic, or xAI for Grok).
  2. Open your browser and navigate to your server's IP address on port 4000 (e.g., http://203.0.113.10:4000).
  3. Click on the 'LiteLLM Admin Panel UI' button.
  4. Log in using the username 'admin' and the 'LITELLM_MASTER_KEY' you set in the .env file earlier.
  5. Select a provider (e.g., Anthropic), then choose a specific model (e.g., Claude 3.7) or select 'All Models'.
  6. Enter the provider's API Key into the designated field and click 'Add Model'.
  7. Repeat this process for other providers like Grok or OpenAI. (A quick API check of the registered models is sketched below.)
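To confirm the models registered, you can hit the proxy's OpenAI-compatible endpoints from the server. A hedged sketch, where $MASTER_KEY holds your LITELLM_MASTER_KEY and the model name is illustrative:

    # List every model the proxy now exposes
    curl -s http://localhost:4000/v1/models -H "Authorization: Bearer $MASTER_KEY"

    # Send a one-off test completion through the proxy
    curl -s http://localhost:4000/v1/chat/completions \
      -H "Authorization: Bearer $MASTER_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "claude-3-7-sonnet", "messages": [{"role": "user", "content": "Say hello"}]}'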

Step 3: Setting Up Virtual Keys and Budget Controls

  1. Click on the 'Virtual Keys' menu on the top left of the LiteLLM dashboard.
  2. Click 'Create New Key'.
  3. Give the key a name (e.g., 'Kids-Access') for easier monitoring.
  4. Select which models are allowed to be accessed by this key (e.g., Claude 3.7 only).
  5. Open the 'Optional Settings' section to set a budget limit (e.g., Max monthly budget of $20).
  6. Click 'Generate' and copy the resulting Virtual Key; it is displayed only once, so store it in a safe place. (A quick test of the new key is sketched below.)
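To sanity-check the key and its model allow-list, a hedged sketch ($VIRTUAL_KEY is assumed to hold the key you just generated; the model name is illustrative):

    curl -s http://localhost:4000/v1/chat/completions \
      -H "Authorization: Bearer $VIRTUAL_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "claude-3-7-sonnet", "messages": [{"role": "user", "content": "ping"}]}'
    # A request for a model outside this key's allow-list should return an authorization error,
    # confirming the restrictions are enforced.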

Step 4: Integrating with Open WebUI

  1. Open your Open WebUI interface.
  2. Go to 'Settings' and look for the 'OpenAI API' configuration.
  3. In the 'Base URL' field, enter the LiteLLM address: 'http://localhost:4000' if Open WebUI runs directly on the host, or the LiteLLM container name (e.g., 'http://litellm:4000') if both run as containers on the same Docker network.
  4. Enter the Virtual Key you generated from LiteLLM into the 'API Key' field.
  5. Click 'Verify Connection' to ensure the connection is successful. (If it fails, see the troubleshooting sketch after this list.)
  6. Click 'Save' and you can now select Claude, Grok, or GPT models directly from the Open WebUI chat interface.
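If 'Verify Connection' fails, you can test reachability from inside the Open WebUI container. A sketch, assuming the container names 'open-webui' and 'litellm' (substitute your own) and that curl is available in the image; the /health/liveliness probe is per LiteLLM's docs:

    docker exec -it open-webui curl -s http://litellm:4000/health/liveliness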
