ComfyUI for Beginners: A Visual Guide
This guide provides a comprehensive introduction to ComfyUI, covering everything from the user interface to basic workflows and key concepts. You'll learn how to navigate the interface, install custom nodes and models, and create your first AI-generated images.
Prerequisites
Before starting, ensure you have:
- ComfyUI installed (see our ComfyUI Installation Guide if you haven't already).
- A basic understanding of AI image generation concepts.
- A computer with a decent GPU (NVIDIA recommended).
If your hardware isn't powerful enough or you want to speed up generations, consider using RunPod's cloud GPU service.
Special Offer - $5 Credit Included!
When you sign up for RunPod using our affiliate link, you'll receive a $5 credit that can be used to generate up to 9,000 images and 300 videos. This gives you plenty of resources to explore ComfyUI and AI image/video generation without any upfront cost!
Understanding the ComfyUI Interface
Step 1: Navigating the ComfyUI Workspace
Familiarize yourself with the main components of the ComfyUI interface:
- Workflow Area: This is where you build your workflows by connecting nodes.
- Workflow Menu: Located at the top, this menu lets you open, save, and export workflows as JSON files. You can also browse pre-installed templates here.
- Manager: Your go-to tool for installing missing custom nodes and updating ComfyUI.
- Queue: Displays your rendering history and allows you to start or cancel workflows.
- Node Library: Access all available nodes for building your workflows.
- Model Library: View all your installed AI models.
Key Interface Elements
- Workflows: Visual representations of AI processes.
- Nodes: Individual features or operations within a workflow.
- Custom Nodes: Community-made nodes that add features not included in the default installation.
Pro Tip
If you get lost in the workspace, press the Fit View button to return to your workflow.
Custom Nodes
The interface may look slightly different depending on whether you have custom nodes installed.
Pro Tip
The Workflow Area can be zoomed in and out using your mouse wheel.
Step 2: Managing Workflows
Learn how to load existing workflows and save your own:
- Open Workflow: Use the Workflow Menu to open a previously saved workflow.
- Drag and Drop: Drag and drop a JSON file into the workspace to load a workflow.
- Save Workflow: Use the Workflow Menu to save your current workflow as a JSON file.
- Browse Templates: Explore the pre-installed workflows in the Templates section of the Workflow Menu.
JSON Files
Workflows are saved as JSON files, which can be easily shared and loaded into ComfyUI.
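To see what's inside one of these files, here is a short Python sketch that loads a workflow saved in ComfyUI's API-style JSON format (each key is a node id, each value records the node's class type and inputs) and lists the node types it uses. The node ids, model filename, and prompt below are illustrative, not from a real export:

```python
import json

# A minimal workflow in ComfyUI's API JSON format (illustrative values).
workflow_json = """
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
  "2": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a mountain at sunrise", "clip": ["1", 1]}}
}
"""

workflow = json.loads(workflow_json)

# List which node types the workflow uses -- handy for spotting custom
# nodes someone would need to install before loading your file.
node_types = sorted({node["class_type"] for node in workflow.values()})
print(node_types)  # ['CLIPTextEncode', 'CheckpointLoaderSimple']
```

A quick scan like this tells you up front whether a shared workflow depends on node types you don't have yet.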
Pro Tip
You can share workflows with others by sending them the JSON file. They can then drag and drop it into their ComfyUI workspace.
Missing Nodes
If you load a workflow with missing node types, use the Manager to install the required custom nodes.
Pro Tip
Back up your workflow JSON files regularly to avoid data loss.
Step 3: Installing Custom Nodes and Models
Extend ComfyUI's functionality by installing custom nodes and models:
- Open the Manager: Access the Manager from the ComfyUI interface.
- Install Missing Nodes: If you have a workflow with missing nodes, the Manager will list them. Check the boxes next to the missing nodes and press Install.
- Update ComfyUI: Use the Update All or Update ComfyUI buttons to keep ComfyUI up to date.
- Model Manager: Use the Model Manager to search for and install AI models.
The Manager
The Manager is your central hub for managing custom nodes, updating ComfyUI, and installing models.
Pro Tip
After installing custom nodes, you need to restart ComfyUI and refresh your browser for the changes to take effect.
Custom Node Updates
Update All updates both ComfyUI and your custom nodes, while Update ComfyUI only updates the base system.
Pro Tip
Regularly check for updates to both ComfyUI and your custom nodes to benefit from new features and bug fixes.
Step 4: Running Workflows
Execute your workflows to generate AI images:
- Start the Workflow: Press the Run button (usually located at the top).
- Generation Count: Set the number of generations you want to run.
- Run Modes: Choose between Run, Run Instant, and Run on Change.
- Clear/Cancel: Use the Clear button to cancel pending tasks and the Red X to cancel the current generation.
Run Modes
- Run: Generates the image once.
- Run Instant: Continuously generates images until stopped.
- Run on Change: Generates an image and then regenerates whenever a change is made to the workflow.
Pro Tip
Use Run Instant with caution, as it will continuously generate images and consume resources.
Workflow Location
The Run button can be dragged and placed anywhere in the workspace.
Pro Tip
Monitor your GPU usage while running workflows to avoid overloading your system.
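Besides the Run button, workflows can also be queued programmatically: a running ComfyUI instance exposes an HTTP endpoint (by default at port 8188) that accepts an API-format workflow wrapped in a small JSON body. The sketch below builds that body; the host, port, and workflow contents are assumptions you'd adjust for your own setup:

```python
import json
import urllib.request

def build_prompt_payload(workflow: dict, client_id: str) -> bytes:
    """Wrap an API-format workflow dict in the JSON body that
    ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

# Tiny placeholder workflow just to show the shape of the payload.
payload = build_prompt_payload(
    {"1": {"class_type": "SaveImage", "inputs": {}}}, client_id="demo")

# To actually queue the job, POST the payload to your ComfyUI instance
# (default local address shown; uncomment with a server running):
# req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload,
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
print(json.loads(payload)["client_id"])  # demo
```

This is how batch scripts and front-ends drive ComfyUI without touching the browser interface.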
Understanding Nodes and Connections
Step 5: Exploring Nodes
Learn about nodes and their role in ComfyUI:
- Node Library: Access the Node Library by clicking the button on the left or pressing N.
- Add Node: Right-click in the workspace and select Add Node to add a node.
- Node Connections: Connect nodes by dragging from an output on one node to an input on another.
What is a Node?
A node is a feature or operation that performs a specific task, such as loading an image, applying a prompt, or saving an image.
Pro Tip
Use rerouting nodes to clean up your workflow and simplify connections.
Node Connections
Every node must be connected into the graph for the workflow to run; a node with a required input left unconnected will cause an error when you press Run.
Pro Tip
Experiment with different nodes to discover their capabilities and how they can be combined.
Step 6: Understanding Connections
Understand the different types of connections and their color coding:
- Inputs and Outputs: Nodes have inputs on the left and outputs on the right. Data flows from left to right.
- Color Coding: Connections are color-coded to indicate the type of data being passed:
  - Yellow: CLIP (text converted to AI language)
  - Red: VAE (Variational Autoencoder, the portal between universes)
  - Blue: Images (in the RGB universe)
  - Pink: Latent Data (in the AI universe)
  - Orange: Conditioning (text and control data in the AI universe)
  - Green: Simple Text (prompts, file paths)
  - Purple: AI Model (safetensors files, checkpoints)
  - Teal: ControlNets (controlling image output)
Connection Colors
The color of a connection indicates the type of data it carries, helping you understand the flow of information in your workflow.
Pro Tip
Pay attention to the color coding to ensure that you are connecting the correct types of data between nodes.
VAE Importance
The VAE (red) acts as a portal between our visible RGB universe and the AI's latent space.
Pro Tip
Use the color coding as a guide when troubleshooting connection issues.
Step 7: Building a Basic Text-to-Image Workflow
Create a simple text-to-image workflow:
- Load AI Model: Use a Load Checkpoint node to load an AI model (e.g., Stable Diffusion XL).
- Convert Text to AI Language: Use CLIP Text Encode nodes for both positive and negative prompts.
- Create Latent Image: Use an Empty Latent Image node to define the image size.
- K Sampler: Use a K Sampler node to generate the image in the latent space.
- VAE Decode: Use a VAE Decode node to convert the latent image back to the RGB universe.
- Save Image: Use a Save Image node to save the generated image.
Key Nodes
- Load Checkpoint: Loads the AI model.
- CLIP Text Encode: Converts text prompts into a format the AI can understand.
- K Sampler: The core image generation node.
- VAE Decode: Converts the AI's latent representation into a viewable image.
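The six steps above can be sketched as a single API-format dictionary. Links are written as [source_node_id, output_index]; the node ids, model filename, prompts, and parameter values here are illustrative placeholders, not a definitive workflow:

```python
# Text-to-image workflow in ComfyUI's API JSON format (illustrative ids).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a castle on a cliff, golden hour",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

# Quick sanity check: every link points at a node that exists.
for node in workflow.values():
    for value in node["inputs"].values():
        if isinstance(value, list):
            assert value[0] in workflow
print(len(workflow), "nodes, all links resolve")
```

Notice the left-to-right data flow from the previous step: the checkpoint loader feeds its model, CLIP, and VAE outputs into the rest of the graph.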
Pro Tip
Color-code your positive prompt node green and your negative prompt node red for easy identification.
Latent Space
The latent space is the AI's internal representation of images, where it manipulates and generates new content.
Pro Tip
Start with a simple prompt and gradually add complexity as you become more comfortable.
Understanding Key Parameters
Step 8: Exploring the K Sampler
Learn about the key parameters in the K Sampler node:
- Seed: Controls the initial noise pattern, influencing the generated image. Set to Randomize for different images each time, or Fixed to reproduce the same image.
- Steps: Determines how many iterations the AI will perform to refine the image. Higher values generally result in more detailed images.
- CFG Scale: Controls how closely the AI follows the prompt. Higher values result in images that more closely match the prompt, but can also reduce creativity.
- Sampler Name: Selects the sampling algorithm used to generate the image. Different samplers have different characteristics and may be better suited for certain models or prompts.
- Denoise: Controls how much the AI rebuilds the image from the initial noise. A value of 1.0 means the image is completely rebuilt, while 0.0 means no changes are made.
K Sampler Importance
The K Sampler is the heart of the image generation process, and understanding its parameters is crucial for controlling the output.
Pro Tip
Start with a Steps value of around 20 and adjust as needed.
Seed Control
Using a fixed seed allows you to experiment with different settings while maintaining a consistent base image.
Pro Tip
Experiment with different Sampler Names to see how they affect the generated image.
Step 9: Denoise and Image-to-Image
Explore the Denoise parameter for image-to-image generation:
- Load Image: Load an input image using a Load Image node.
- Encode to Latent: Use a VAE Encode node to convert the image to the latent space.
- Connect to K Sampler: Connect the latent representation to the K Sampler.
- Adjust Denoise: Set the Denoise value to control how much the AI rebuilds the image. Lower values preserve more of the original image, while higher values allow for more significant changes.
Image-to-Image
Image-to-image generation allows you to transform an existing image using AI, guided by a prompt and the Denoise parameter.
Pro Tip
Experiment with different Denoise values to find the right balance between preserving the original image and introducing new elements.
Model Dependency
The optimal Denoise value will vary depending on the AI model being used.
Pro Tip
Use a high-resolution input image for best results in image-to-image generation.
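In workflow-graph terms, the conversion described above boils down to two changes: the Empty Latent Image is replaced by a Load Image node feeding a VAE Encode node, and the K Sampler's Denoise drops below 1.0. This sketch performs that rewrite on an API-format dict; the node ids and filenames are hypothetical placeholders:

```python
def to_img2img(workflow: dict, image_name: str, denoise: float) -> dict:
    """Return a copy of a text-to-image workflow rewired for
    image-to-image (illustrative node ids "8" and "9")."""
    wf = {k: dict(v, inputs=dict(v["inputs"])) for k, v in workflow.items()}
    wf["8"] = {"class_type": "LoadImage", "inputs": {"image": image_name}}
    wf["9"] = {"class_type": "VAEEncode",
               "inputs": {"pixels": ["8", 0], "vae": ["1", 2]}}
    sampler = next(v for v in wf.values() if v["class_type"] == "KSampler")
    sampler["inputs"]["latent_image"] = ["9", 0]  # was the empty latent
    sampler["inputs"]["denoise"] = denoise        # <1.0 preserves the input
    return wf

# Minimal stand-in workflow, just enough structure to demonstrate.
base = {"1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "model.safetensors"}},
        "5": {"class_type": "KSampler",
              "inputs": {"latent_image": ["4", 0], "denoise": 1.0}}}
img2img = to_img2img(base, "input.png", denoise=0.5)
print(img2img["5"]["inputs"]["denoise"])  # 0.5
```

Because the function copies the dict, the original text-to-image workflow is left untouched, so you can keep both variants side by side.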
Step 10: Saving Images
Save your generated images:
- Save Image Node: Use the Save Image node to save your generated images.
- File Path: Specify the file path and name for your image. Adding a slash creates a subfolder in the output directory.
- Preview Image Node: Use a Preview Image node to display the image in the ComfyUI interface without saving it.
Output Directory
By default, images are saved to the ComfyUI/output folder.
Pro Tip
Use the Preview Image node to quickly view your images before saving them.
Dynamic File Paths
You can create dynamic file paths by including variables in the file name.
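For instance, ComfyUI-style date tokens let a prefix like portraits/%date:yyyy-MM-dd%/shot sort images into daily subfolders. The toy expander below mimics that idea so you can see how such a prefix resolves; it is my own sketch, not ComfyUI's actual implementation:

```python
import re
from datetime import datetime

def expand_prefix(prefix: str, now: datetime) -> str:
    """Toy expansion of %date:...% tokens in a filename prefix,
    using yyyy/MM/dd/hh/mm/ss placeholders."""
    def sub(match: re.Match) -> str:
        fmt = match.group(1)
        for token, strf in [("yyyy", "%Y"), ("MM", "%m"), ("dd", "%d"),
                            ("hh", "%H"), ("mm", "%M"), ("ss", "%S")]:
            fmt = fmt.replace(token, now.strftime(strf))
        return fmt
    return re.sub(r"%date:(.*?)%", sub, prefix)

# A slash in the prefix becomes a subfolder under ComfyUI/output:
prefix = "portraits/%date:yyyy-MM-dd%/shot"
print(expand_prefix(prefix, datetime(2024, 3, 1)))
# portraits/2024-03-01/shot
```

Date-based subfolders keep long generation sessions from piling hundreds of images into a single directory.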
Pro Tip
Organize your output directory into subfolders to keep your generated images organized.
Congratulations!
You've successfully learned the basics of ComfyUI! You can now navigate the interface, install custom nodes and models, build basic workflows, and generate your own AI images.
Related Guides
- ComfyUI Installation Guide - Complete installation process for ComfyUI
- Running ComfyUI on RunPod - Run ComfyUI on cloud GPUs instead of local hardware
Next Steps
Now that you've completed this guide:
- Explore more advanced workflows and custom nodes.
- Experiment with different AI models and settings.
- Join the ComfyUI community to learn from other users.
