
Comparing Models & Installing DeepSeek 32B in Open WebUI

Techniques for comparing model responses side by side, and how to install larger models (32B) via the terminal for more capable performance.

Key Insights

Model Size vs. Speed (Hardware)

Larger models (such as Llama 3.3, a roughly 43 GB download, or DeepSeek R1 32B) require significant processing power. Run on a standard laptop, they may generate responses very slowly, word by word, whereas 7B models respond almost instantaneously.
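
Before committing to a big download, you can check what is already installed and how it is running. A minimal sketch, assuming Ollama is installed with its default setup:

```sh
# List locally installed models and their on-disk sizes
ollama list

# Show currently loaded models and how they are split between
# CPU and GPU memory; a model spilling over to CPU is a common
# reason generation slows to word-by-word speed
ollama ps
```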

DeepSeek R1 Special Feature

DeepSeek R1 displays a 'Thinking' phase inside brackets (in the raw output, the model wraps this span in `<think>` tags). Don't be confused: this is the model's reasoning process, not part of the actual final response you requested.
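
You can see the raw tags by querying the model directly instead of through the UI. A sketch assuming Ollama's default local API on port 11434 and the `jq` tool for extracting the JSON field (the prompt is just an example):

```sh
# Ask DeepSeek R1 via Ollama's local API; the 'response' field
# contains the <think>...</think> reasoning followed by the answer
curl -s http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1:7b", "prompt": "What is 17 * 23?", "stream": false}' \
  | jq -r '.response'
```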

Localhost vs. Cloud

Even though you access it through Chrome or another browser, Open WebUI typically runs at `localhost:3000`. This means your prompts and models are processed entirely on your own computer, not on an external cloud server.
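
You can confirm that both services answer from your own machine. A quick check, assuming the default ports (3000 for Open WebUI, 11434 for the Ollama backend that actually runs the models):

```sh
# Open WebUI front end; should respond from your own machine
curl -sI http://localhost:3000

# Ollama backend; returns its version as JSON
curl http://localhost:11434/api/version
```
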
Prompts

AI Reasoning Capability Test

Target: DeepSeek R1
Give me 10 challenges only a reasoning AI model can solve
Step by Step

How to Select and Run DeepSeek R1 (7B)

  1. Log in to your Open WebUI account.
  2. Click on the model dropdown menu located at the top center of the screen.
  3. Find and select the 'deepseek-r1:latest' model (or the 7B version) from the list of available models.
  4. If you want to use this model as your primary choice, click the 'Set as default' option.
  5. Type your prompt into the chat box and press 'Send'.
  6. Pay attention to the 'Thinking' section within the brackets; this is the model's reasoning process before it delivers the final answer. (A terminal equivalent is sketched after this list.)
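
The same model can also be run directly from the terminal, without the web interface. A minimal sketch, assuming the 7B tag is already installed:

```sh
# Start an interactive chat with the 7B model; the <think>...</think>
# block appears inline in the raw terminal output
ollama run deepseek-r1:7b

# Type /bye at the prompt to leave the chat session
```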

Comparing Two AI Models Side-by-Side

  1. Ensure you have already sent a prompt to the first model (e.g., DeepSeek R1).
  2. Click the '+' icon or the model comparison button located at the top of the chat.
  3. Select a second model from the dropdown list (e.g., Llama 3.3) to include it in the same chat session.
  4. Press the 'Send' or 'Prompt Again' button.
  5. The screen now splits into a side-by-side view, letting you compare text-generation speed and response quality between both models simultaneously. (A terminal-based speed comparison is sketched after this list.)
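
For a rough speed comparison outside the UI, you can time the same prompt against both models from the terminal. A sketch assuming both model tags are installed (the Llama tag here is an example):

```sh
PROMPT="Give me 10 challenges only a reasoning AI model can solve"

# Run the identical prompt through each model and compare wall-clock time
time ollama run deepseek-r1:7b "$PROMPT"
time ollama run llama3.3 "$PROMPT"
```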

How to Install the DeepSeek 32B Model via Terminal

  1. Open the Terminal app (Mac/Linux) or Command Prompt/PowerShell (Windows) on your computer.
  2. Copy and paste the installation command for the DeepSeek 32B model, e.g. `ollama run deepseek-r1:32b` (useful variants are sketched after this list).
  3. Press 'Enter' to begin the download; the model file is approximately 19 GB.
  4. Wait until the download and checksum verification have fully completed in the Terminal.
  5. Once finished, close the Terminal and return to your web browser.
  6. Go to `localhost:3000` to open Open WebUI.
  7. Click on the model dropdown and select the newly installed '32B' model to start a new chat.
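
For reference, the full terminal workflow might look like the sketch below; `ollama pull` downloads a model without opening a chat session:

```sh
# Download the 32B model (~19 GB) without starting a chat
ollama pull deepseek-r1:32b

# Or download and drop straight into an interactive chat
ollama run deepseek-r1:32b

# Confirm the new model appears alongside the others
ollama list
```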
