ComfyUI Block

ComfyUI (original repo) is an extremely powerful Stable Diffusion workflow builder. It has a node-based GUI and is aimed at advanced users.


This block is Advanced/Experimental. Contact us or join our Discord for help or to report issues.

Where to find it in the builder

Sample

There is a new button to edit the workflow in a preview!

You can drag your API-format graph .json in here and run it!

Graph checks

  • ✅ You need to have one SaveImage or VHS_VideoCombine node; the engine searches for it and sets it as the last_node_id (see the minimal sketch after this list).

  • ✅ All LoadImage image fields need to be valid image links (JPG and PNG preferred). You can get these by uploading an image with the Image Input Block. Downloading videos is not yet supported.
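
As a minimal sketch of the two checks above (the URL is a placeholder; in practice use a link produced by the Image Input Block), a valid fragment has a LoadImage node pointing at a hosted image and a single SaveImage node as the final output:

{
  "1": {
    "inputs": {
      "image": "https://example.com/input.jpg",
      "choose file to upload": "image"
    },
    "class_type": "LoadImage"
  },
  "2": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["1", 0]
    },
    "class_type": "SaveImage"
  }
}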

Plugins


ComfyUI Custom Nodes List

This list contains all the custom nodes installed in our ComfyUI instance. Each entry includes the GitHub repository reference and the specific commit used.

ComfyUI (Base)

  • Reference: https://github.com/comfyanonymous/ComfyUI

  • Commit: 45ec1cbe963055798765645c4f727122a7d3e35e

ControlNet Auxiliary Preprocessors

  • Reference: https://github.com/Fannovel16/comfyui_controlnet_aux

  • Commit: c0b33402d9cfdc01c4e0984c26e5aadfae948e05

Ultimate SD Upscale

  • Reference: https://github.com/ssitu/ComfyUI_UltimateSDUpscale

  • Commit: b303386bd363df16ad6706a13b3b47a1c2a1ea49

FizzNodes

  • Reference: https://github.com/FizzleDorf/ComfyUI_FizzNodes

  • Commit: fd2165162ed939d3c23ab6c63f206ae93457aad8

Video Helper Suite

  • Reference: https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite

  • Commit: e369cac458f977fab0ee5719ce5e4057dc04729f

Inspire Pack

  • Reference: https://github.com/ltdrdata/ComfyUI-Inspire-Pack

  • Commit: 985f6a239b1aed0c67158f64bf579875ec292cb2

Frame Interpolation

  • Reference: https://github.com/Fannovel16/ComfyUI-Frame-Interpolation

  • Commit: 5e11679995c68f33891c306a393915feefe234b5

ComfyMath

  • Reference: https://github.com/evanspearman/ComfyMath

  • Commit: be9beab9923ccf5c5e4132dc1653bcdfa773ed70

smZNodes

  • Reference: https://github.com/shiimizu/ComfyUI_smZNodes

  • Commit: 378ed4567f3290823d5dc5e9556c7d742dc82d23

Advanced ControlNet

  • Reference: https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet

  • Commit: 33d9884b76e8d7a2024691c5d98308e7e61bf38d

Segment Anything

  • Reference: https://github.com/storyicon/comfyui_segment_anything

  • Commit: ab6395596399d5048639cdab7e44ec9fae857a93

Impact Pack

  • Reference: https://github.com/ltdrdata/ComfyUI-Impact-Pack

  • Commit: 971c4a37aa4e77346eaf0ab80adf3972f430bec1

WAS Node Suite

  • Reference: https://github.com/WASasquatch/was-node-suite-comfyui

  • Commit: 6c3fed70655b737dc9b59da1cadb3c373c08d8ed

ComfyUI Essentials

  • Reference: https://github.com/cubiq/ComfyUI_essentials

  • Commit: bd9b89b7c924302e14bb353b87c3373af447bf55

KJNodes

  • Reference: https://github.com/kijai/ComfyUI-KJNodes

  • Commit: d25604536e88b42459cf7ead9a1306271ed7fe6f

Comfyroll Custom Nodes

  • Reference: https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes

  • Commit: d78b780ae43fcf8c6b7c6505e6ffb4584281ceca

AnimateDiff Evolved

  • Reference: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved

  • Commit: f9e0343f4c4606ee6365a9af4a7e16118f1c45e1

IPAdapter Plus

  • Reference: https://github.com/cubiq/ComfyUI_IPAdapter_plus

  • Commit: 0d0a7b3693baf8903fe2028ff218b557d619a93d

GlifNodes

  • Reference: https://github.com/glifxyz/ComfyUI-GlifNodes

  • Commit: 5d7e5c80aa175fb6ac860a6d63a15d1f699023fc

Moondream

  • Reference: https://github.com/kijai/ComfyUI-moondream

  • Commit: b97ad4718821d7cee5eacce139c94c9de51268b8

Layer Diffuse

  • Reference: https://github.com/huchenlei/ComfyUI-layerdiffuse

  • Commit: 151f7460bbc9d7437d4f0010f21f80178f7a84a6

rgthree-comfy

  • Reference: https://github.com/rgthree/rgthree-comfy

  • Commit: db062961ed4a3cd92f4eb2b8eeedbcc742b5d5e9

ComfyUI-SUPIR

  • Reference: https://github.com/kijai/ComfyUI-SUPIR

  • Commit: 656e55e8154a2cdfa0738a3474c8aa8e02113e66

  • Author: kijai

  • Description: SUPIR super-resolution

Note:

  • Please refer to the respective GitHub repositories for detailed installation and usage instructions.

  • This list is subject to updates as new nodes are added or existing ones are modified.

Exporting the graph

  1. Enable developer options:

    1. Open the ComfyUI settings (gear icon).

    2. Check Enable Dev mode Options.

  2. Export the API graph with the Save (API Format) button (see the format sketch below).
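
The block expects the API-format export, not the regular workflow save. A rough sketch of the difference (key names can vary slightly between ComfyUI versions; arrays are emptied here for brevity): a regular Save produces a graph with nodes and links arrays,

{ "last_node_id": 16, "last_link_id": 22, "nodes": [], "links": [], "version": 0.4 }

while Save (API Format) produces a flat map of node IDs to their inputs and class_type, like this SaveImage entry and the full example in the next section:

{
  "9": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["8", 0]
    },
    "class_type": "SaveImage"
  }
}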

Graph example

This is a graph that should run via our ComfyUI Block. Note the Cloudinary link in the image field of the LoadImage node.

{
  "3": {
    "inputs": {
      "seed": 624586032019704,
      "steps": 4,
      "cfg": 1,
      "sampler_name": "lcm",
      "scheduler": "normal",
      "denoise": 1,
      "model": [
        "13",
        0
      ],
      "positive": [
        "6",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "KSampler"
  },
  "4": {
    "inputs": {
      "ckpt_name": "sd_xl_base_1.0.safetensors"
    },
    "class_type": "CheckpointLoaderSimple"
  },
  "5": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage"
  },
  "6": {
    "inputs": {
      "text": "man in space",
      "clip": [
        "10",
        1
      ]
    },
    "class_type": "CLIPTextEncode"
  },
  "7": {
    "inputs": {
      "text": "text, watermark",
      "clip": [
        "10",
        1
      ]
    },
    "class_type": "CLIPTextEncode"
  },
  "8": {
    "inputs": {
      "samples": [
        "3",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode"
  },
  "9": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "8",
        0
      ]
    },
    "class_type": "SaveImage"
  },
  "10": {
    "inputs": {
      "lora_name": "lcm-lora-sdxl.safetensors",
      "strength_model": 1,
      "strength_clip": 1,
      "model": [
        "4",
        0
      ],
      "clip": [
        "4",
        1
      ]
    },
    "class_type": "LoraLoader"
  },
  "12": {
    "inputs": {
      "ipadapter_file": "ip-adapter-plus_sdxl_vit-h.safetensors"
    },
    "class_type": "IPAdapterModelLoader"
  },
  "13": {
    "inputs": {
      "weight": 0.3,
      "noise": 0,
      "weight_type": "original",
      "start_at": 0,
      "end_at": 1,
      "unfold_batch": false,
      "ipadapter": [
        "12",
        0
      ],
      "clip_vision": [
        "15",
        0
      ],
      "image": [
        "16",
        0
      ],
      "model": [
        "10",
        0
      ]
    },
    "class_type": "IPAdapterApply"
  },
  "15": {
    "inputs": {
      "clip_name": "vit-h-image-encoder.safetensors"
    },
    "class_type": "CLIPVisionLoader"
  },
  "16": {
    "inputs": {
      "image": "https://res.cloudinary.com/dzkwltgyd/image/upload/v1699969928/image-input-block-production/rcvznyr9hhewaf9tdnts.jpg",
      "choose file to upload": "image"
    },
    "class_type": "LoadImage"
  }
}

Example workflows

AnimateDiff

This is a basic AnimateDiff workflow based on SD15, built with ComfyUI AnimateDiff (AnimateDiff Evolved); read how the context options work in that repo's documentation.

GIF output
Workflow image (drop this in Comfy)

From here on, you could:

  • Add motion LoRAs to control the motion

  • Use an upscaler to make it higher res

  • Use LCM LoRA to make things faster

IPAdapter with TileControlnet

Txt2Image with SDXL + Upscaler

This workflow uses SDXL to create a base image and then upscales it with the UltimateSD Upscale node (ComfyUI UltimateSDUpscale). UltimateSD Upscale works best with a tile ControlNet, which is why we also load an SD15 checkpoint (see the sketch below the outputs).

SDXL output (1K)
Upscaled (2K)
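
As a rough sketch of how the tile ControlNet is wired in (the node IDs, the SD15 tile model filename, and the referenced conditioning/image nodes are placeholders rather than the exact nodes of this workflow), the SD15 conditioning is passed through ControlNetApply with the base image before it reaches the upscale pass:

{
  "20": {
    "inputs": {
      "control_net_name": "control_v11f1e_sd15_tile.pth"
    },
    "class_type": "ControlNetLoader"
  },
  "21": {
    "inputs": {
      "strength": 1.0,
      "conditioning": ["6", 0],
      "control_net": ["20", 0],
      "image": ["8", 0]
    },
    "class_type": "ControlNetApply"
  }
}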

