Convert images using Z-Image Turbo Image-to-Image workflow in ComfyUI

Step-by-step tutorial on modifying the ComfyUI workflow to accept input images for restyling or refining 3D renders into realistic photos.

Key Insights

Understanding Denoise Strength

The 'denoise' value controls how far the sampler is allowed to move away from the uploaded image. A lower value (e.g., 0.4) keeps the structure very close to the source, while a higher value (e.g., 0.8 to 1.0) gives the AI more creative freedom to deviate from it.
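A handy way to internalize this is as a small table of starting values. The presets in the sketch below are only a rule of thumb distilled from the numbers used in this tutorial, not anything built into ComfyUI, and the exact effect of a given denoise value still depends on the model and sampler settings.

```python
# Rule-of-thumb starting points for 'denoise' in image-to-image, based on the
# values suggested in this tutorial. Higher denoise = more of the input image
# is re-imagined; ComfyUI's exact noise scheduling is more involved than this.

def suggest_denoise(goal: str) -> float:
    """Map a qualitative goal to a starting denoise value (rule of thumb only)."""
    presets = {
        "light refine":  0.4,   # keep composition and details, polish rendering
        "restyle":       0.5,   # the tutorial's suggested starting point
        "heavy restyle": 0.8,   # e.g. 3D render -> realistic photo
        "reimagine":     1.0,   # input has little effect beyond its resolution
    }
    return presets[goal]

for goal in ("light refine", "restyle", "heavy restyle", "reimagine"):
    print(f"{goal:>13}: start around denoise ~ {suggest_denoise(goal)}")
```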

Workflow Limitation

This workflow is strictly for image-to-image conversion (re-styling or refining an existing picture). It is not an instruction-driven image editor like the upcoming Z-Image Edit model.
Prompts

Style Transfer Prompt Example

Target: ComfyUI CLIP Text Encode Node
realistic cinematic photo of a Chinese girl in a pink hanfu talking
Step by Step

Modifying the Workflow for Image-to-Image

  1. Double-click anywhere on the ComfyUI workspace background to open the node search bar.
  2. Type 'load image' and select the 'Load Image' node to add it to the workspace.
  3. Click the upload button within the 'Load Image' node and select your source image.
  4. Double-click the background again, search for 'VAE Encode', and add the 'VAE Encode' node.
  5. Click and drag the 'IMAGE' output point from the 'Load Image' node to the 'pixels' input point on the 'VAE Encode' node.
  6. Drag the 'LATENT' output point from the 'VAE Encode' node and connect it to the 'latent_image' input on the 'KSampler' node (this will automatically disconnect the previous 'Empty Latent Image' node).
  7. Select the now-disconnected 'Empty Latent Image' node and press 'Ctrl+B' (or select 'Bypass') to disable it.
  8. Connect the 'vae' input on the new 'VAE Encode' node to the VAE output of your Checkpoint Loader (or a dedicated VAE Loader, if your workflow uses one); the sketch after this list shows the finished wiring.
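If you prefer working with the exported API (JSON) version of the workflow rather than the graph UI, the rewiring above corresponds roughly to the Python sketch below. The node class names (LoadImage, VAEEncode, KSampler, etc.) are standard ComfyUI nodes, but the node IDs, file names, prompt text, and sampler settings are placeholders; I also assume a single checkpoint loader for brevity, whereas your Z-Image Turbo workflow may use separate model/CLIP/VAE loader nodes, in which case only the connection IDs change.

```python
# Sketch of the rewired image-to-image graph in ComfyUI's API workflow format.
# Connections are [source_node_id, output_index]; all IDs, file names, and
# sampler settings below are placeholders for what your workflow actually uses.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",          # MODEL / CLIP / VAE
          "inputs": {"ckpt_name": "z_image_turbo.safetensors"}},  # placeholder name
    "2": {"class_type": "CLIPTextEncode",                  # positive prompt
          "inputs": {"text": "realistic cinematic photo", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",                  # negative prompt
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "LoadImage",                       # steps 1-3: source image
          "inputs": {"image": "my_3d_render.png"}},
    "5": {"class_type": "VAEEncode",                       # steps 4-5 and 8
          "inputs": {"pixels": ["4", 0], "vae": ["1", 2]}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0],
                     "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["5", 0],             # step 6: replaces Empty Latent Image
                     "seed": 0, "steps": 8, "cfg": 1.0,    # illustrative; keep your workflow's values
                     "sampler_name": "euler", "scheduler": "simple",
                     "denoise": 0.5}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "img2img"}},
}
```

Note that the Empty Latent Image node simply has no counterpart in this graph: bypassing it in the UI (step 7) has the same effect as leaving it out of the API version.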

Configuring Generation Parameters

  1. Navigate to the Positive Prompt node (CLIP Text Encode) and enter your description of the desired output style (e.g., 'realistic cinematic photo').
  2. Locate the 'denoise' setting within the 'KSampler' node.
  3. Set the 'denoise' value to '0.5' as a starting point (roughly an even balance between preserving the original image and following the prompt).
  4. Click 'Queue Prompt' to run the generation.
  5. Review the result. If the output retains too much of the original style (e.g., still looks like 3D animation), increase the 'denoise' value (e.g., to '0.8').
  6. Click 'Queue Prompt' again to generate the refined image (the API sketch after this list shows the same adjust-and-requeue loop).
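If you drive ComfyUI through its HTTP API instead of pressing 'Queue Prompt' in the browser, the same adjust-and-requeue loop looks like the sketch below. It assumes a local server on the default port 8188 and the `workflow` dict from the earlier sketch, where node "6" is the KSampler; adapt the ID to your own graph.

```python
# Queue the workflow twice with different denoise values via ComfyUI's /prompt
# endpoint (assumes a local server at the default address and the 'workflow'
# dict from the previous sketch, where node "6" is the KSampler).
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"

def queue_prompt(workflow: dict) -> dict:
    """POST an API-format workflow to ComfyUI and return the server's response."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(
        COMFY_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# First pass: denoise 0.5 keeps the layout of the source image (step 3 above).
workflow["6"]["inputs"]["denoise"] = 0.5
print(queue_prompt(workflow))

# Still looks like the 3D original? Raise denoise and queue again (steps 5-6).
workflow["6"]["inputs"]["denoise"] = 0.8
print(queue_prompt(workflow))
```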
