Key Insights
Tips for Choosing a Model Based on VRAM
If your GPU has only 8GB of VRAM, choose the Q2 version of the model (roughly 7GB on disk). With 12GB of VRAM you can use a mid-range quantization (such as Q4), and with 16GB of VRAM or more you can try a less compressed version for better quality.
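As a rough rule of thumb, this choice can be written as a simple lookup. The sketch below is only illustrative: the quantization names (Q2_K, Q4_K_M, Q8_0) and VRAM thresholds are assumptions based on the guidance above, so always check the actual file sizes of the GGUF release you download.

```python
# Minimal sketch: pick a GGUF quantization tier from available VRAM.
# The tier names and thresholds are illustrative assumptions, not exact
# file sizes for any particular model release.

def pick_gguf_quant(vram_gb: float) -> str:
    """Return a suggested GGUF quantization level for a given VRAM budget."""
    if vram_gb >= 16:
        return "Q8_0"    # less compressed, better quality
    if vram_gb >= 12:
        return "Q4_K_M"  # mid-range trade-off
    return "Q2_K"        # heaviest compression, fits ~8GB cards

if __name__ == "__main__":
    for vram in (8, 12, 16, 24):
        print(f"{vram}GB VRAM -> {pick_gguf_quant(vram)}")
```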
The Trade-off Between Size and Quality
Using a GGUF model (such as Q2) drastically reduces VRAM usage, in this case from about 40GB for the full model down to roughly 7GB, though the output quality may look slightly lower than with the full model.
Update Regularly
Even if you already have the GGUF node installed, it is recommended to update it regularly via the Manager to ensure compatibility with newer models such as Qwen-Image-Edit.
Step by Step
How to Install the GGUF Node in ComfyUI
- Click the 'Manager' button in the ComfyUI menu panel.
- Select 'Custom Nodes Manager' from the list of options that appears.
- In the search bar at the top, type 'gguf'.
- Find 'ComfyUI-GGUF' by city96 and click the 'Install' button.
- Fully restart ComfyUI after the installation completes so the new node is loaded (an optional sanity check is sketched below).
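If you want to confirm the result from the command line, a small script can check that the node folder and your GGUF model file ended up in the expected places. This is only a sketch that assumes the default ComfyUI folder layout (custom_nodes/ and models/unet/ inside the ComfyUI directory); adjust the path to your own installation.

```python
# Minimal sketch: sanity-check a ComfyUI install after adding ComfyUI-GGUF.
# Assumes the default folder layout; adjust COMFYUI_DIR to your install path.
from pathlib import Path

COMFYUI_DIR = Path.home() / "ComfyUI"  # assumption: default install location

def check_gguf_setup(comfy_dir: Path) -> None:
    node_dir = comfy_dir / "custom_nodes" / "ComfyUI-GGUF"
    unet_dir = comfy_dir / "models" / "unet"
    print("ComfyUI-GGUF node installed:", node_dir.is_dir())
    gguf_models = sorted(unet_dir.glob("*.gguf")) if unet_dir.is_dir() else []
    print("GGUF models in models/unet:", [m.name for m in gguf_models])

if __name__ == "__main__":
    check_gguf_setup(COMFYUI_DIR)
```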
Configuring the UNET Loader for GGUF Models
- Double-click any empty area of the ComfyUI canvas to open the node search.
- Type 'Unet Loader' in the search box and select the 'Unet Loader (GGUF)' node.
- If you just added the model file to the models/unet folder, press 'R' on your keyboard to 'Refresh' the model list.
- On the Unet Loader (GGUF) node, click the 'unet_name' dropdown menu.
- Select the downloaded GGUF model (e.g., the Q2 GGUF model for the lowest VRAM usage).
- Connect the 'MODEL' output from that node to the rest of your image or video generation workflow (a minimal example of this wiring is sketched below).
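For reference, here is a rough sketch of how that connection looks in ComfyUI's API (JSON) workflow format. The node ids, the "UnetLoaderGGUF" class name, and the model filename are assumptions for illustration; export your own workflow with "Save (API Format)" to see the exact names your setup uses.

```python
# Minimal sketch of wiring the GGUF loader in ComfyUI's API workflow format.
# Node ids, class names, and the filename below are illustrative assumptions.
workflow_fragment = {
    "1": {
        "class_type": "UnetLoaderGGUF",              # the Unet Loader (GGUF) node
        "inputs": {"unet_name": "model-Q2_K.gguf"},  # hypothetical filename
    },
    "2": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["1", 0],  # MODEL output of node 1 feeds the sampler
            # remaining sampler inputs (seed, steps, cfg, latent, ...) omitted
        },
    },
}
```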