🧱 ComfyUI Block
ComfyUI is an extremely powerful workflow builder for Stable Diffusion. It has a node-based GUI and is aimed at advanced users.
Advanced/Experimental: contact us or join our Discord to report issues.
Where to find it in the builder
Sample
There is a new button to edit the workflow in a preview!
You can drag your (API) graph .json file in here and run it!
Graph checks
- ✅ You need one `SaveImage` or `VHS_VideoCombine` node; the engine searches for it and sets it as the `last_node_id`.
- ✅ All `LoadImage` `image` fields need to be valid image links (`jpg` and `png` preferred). You can get these by uploading an image with the image uploading block. Downloading videos is not yet supported.
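If you want to catch both problems before dropping a graph into the block, a small script can check them up front. The sketch below mirrors the two rules above; it is not the engine's actual validation code, and `workflow_api.json` is a placeholder filename.

```python
import json

# Nodes the engine accepts as the graph's output node.
OUTPUT_NODES = {"SaveImage", "VHS_VideoCombine"}

def check_graph(path: str) -> None:
    with open(path) as f:
        graph = json.load(f)

    # Rule 1: one SaveImage or VHS_VideoCombine node must exist.
    outputs = [nid for nid, node in graph.items()
               if node.get("class_type") in OUTPUT_NODES]
    if not outputs:
        raise ValueError("graph needs a SaveImage or VHS_VideoCombine node")

    # Rule 2: every LoadImage "image" field must be a link, not a local file.
    for nid, node in graph.items():
        if node.get("class_type") == "LoadImage":
            url = str(node["inputs"].get("image", ""))
            if not url.startswith(("http://", "https://")):
                raise ValueError(f"node {nid}: image must be a valid image link")
            if not url.lower().endswith((".jpg", ".jpeg", ".png")):
                print(f"node {nid}: jpg/png links are preferred")

check_graph("workflow_api.json")  # placeholder path to your exported API graph
```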
Compatible models
ComfyUI Glif Model List
Base Models

SD1.5 (Stable Diffusion 1.5)
- `v1-5-pruned-emaonly.ckpt`: Vanilla SD1.5 model

SDXL (Stable Diffusion XL)
- `sd_xl_base_1.0.safetensors`: Vanilla SDXL model
- `sd_xl_refiner_1.0.safetensors`: SDXL refiner (use only for refinement, not as a base model)

SD2.1 (Stable Diffusion 2.1)

Fine-tuned Models

SD1.5-based
- `DreamShaper8_LCM.safetensors`: Fast universal model with LCM built in
- `Realistic_Vision_V5.1_fp16-no-ema.safetensors`: Photorealistic model
- `arthemyObjects_v10.safetensors`: General object-based model (not focused on human shapes)

SDXL-based
- `albedobaseXL_v12.safetensors`: Universally great SDXL model
- `fabricated_reality_sdxl_v14.safetensors`: Realistic SDXL model with improvements for eyes, teeth, and hands

ControlNet Models

SD1.5 Compatible
- `control_boxdepth_LooseControlfp16.safetensors`: Loose ControlNet model
- `control_sd15_inpaint_depth_hand_fp16.safetensors`: Depth estimation for hands
- `control_v11e_sd15_ip2p_fp16.safetensors`: Pixel-to-pixel instruction
- `control_v11e_sd15_shuffle_fp16.safetensors`: Image shuffling
- `control_v11f1e_sd15_tile_fp16.safetensors`: Image tiling
- `control_v11f1p_sd15_depth_fp16.safetensors`: Depth estimation
- `control_v11p_sd15_canny_fp16.safetensors`: Canny edge detection
- `control_v11p_sd15_inpaint_fp16.safetensors`: Image inpainting
- `control_v11p_sd15_lineart_fp16.safetensors`: Line art
- `control_v11p_sd15_mlsd_fp16.safetensors`: Multi-level line segment detection
- `control_v11p_sd15_normalbae_fp16.safetensors`: Surface normal estimation
- `control_v11p_sd15_openpose_fp16.safetensors`: Human pose estimation
- `control_v11p_sd15_scribble_fp16.safetensors`: Scribble-based image generation
- `control_v11p_sd15_seg_fp16.safetensors`: Segmentation
- `control_v11p_sd15_softedge_fp16.safetensors`: Soft edge image generation
- `control_v11p_sd15s2_lineart_anime_fp16.safetensors`: Anime line art generation
- `control_v11u_sd15_tile_fp16.safetensors`: Tile-based ControlNet
- `control_v1p_sd15_qrcode_monster.safetensors`: Creative QR code generation
- `temporalnetversion2.safetensors`: Enhances temporal consistency of generated outputs

SDXL Compatible
- `OpenPoseXL2.safetensors`: OpenPose model for SDXL
- `control-lora-canny-rank128.safetensors`: Canny edge detection
- `control-lora-depth-rank128.safetensors`: Depth estimation
- `control-lora-recolor-rank128.safetensors`: Colorize black-and-white photographs
- `control-lora-sketch-rank128-metadata.safetensors`: Color in drawings
- `control_v1p_sdxl_qrcode_monster.safetensors`: Creative QR code generation for SDXL
- `controlnet-sd-xl-1.0-softedge-dexined.safetensors`: Soft edge preprocessing
- `depth-zoe-xl-v1.0-controlnet.safetensors`: Depth estimation

IP-Adapter Models
- `ip-adapter-full-face_sd15.safetensors`: Updated version of ip-adapter-face (SD1.5)
- `ip-adapter-plus-face_sd15.safetensors`: Face-specific version of ip-adapter-plus (SD1.5)
- `ip-adapter-plus-face_sdxl_vit-h.safetensors`: Face-specific version for SDXL
- `ip-adapter-plus_sd15.safetensors`: Enhanced version for SD1.5
- `ip-adapter-plus_sdxl_vit-h.safetensors`: Enhanced version for SDXL
- `ip-adapter_sd15.safetensors`: Base version for SD1.5
- `ip-adapter_sd15_light.safetensors`: Lightweight version for SD1.5
- `ip-adapter_sd15_vit-G.safetensors`: ViT-G version for SD1.5
- `ip-adapter_sdxl.safetensors`: Base version for SDXL
- `ip-adapter_sdxl_vit-h.safetensors`: ViT-H version for SDXL

LoRA Models
- `PS1Redmond-PS1Game-Playstation1Graphics.safetensors`: PlayStation 1 style (SDXL)
- `PixelArtRedmond-Lite64.safetensors`: Pixel art style (SDXL)
- `StickersRedmond.safetensors`: Sticker style (SDXL)
- `add-detail-xl.safetensors`: Detail enhancer for SDXL
- `add_detail.safetensors`: Detail enhancer for SD1.5
- `cereal_box_sdxl_v1.safetensors`: Cereal box cover style (SDXL)
- `lcm-lora-sdv1-5.safetensors`: LCM for faster sampling (SD1.5)
- `lcm-lora-sdxl.safetensors`: LCM for faster sampling (SDXL)
- `more_details.safetensors`: Detail and composition enhancer (SD1.5)
- `sd_xl_offset_example-lora_1.0.safetensors`: SNR improvement for SDXL
- `sdxl_lightning_2step_lora.safetensors`: Fast text-to-image generation (SDXL)
- `sdxl_lightning_4step_lora.safetensors`: Fast text-to-image generation (SDXL)
- `sdxl_lightning_8step_lora.safetensors`: Fast text-to-image generation (SDXL)
- `theovercomer8sContrastFix_sd15.safetensors`: Contrast improvement (SD1.5)
- `theovercomer8sContrastFix_sd21768.safetensors`: Contrast improvement (SD2.1)
Important Note: You can add any LoRA from Hugging Face using the "HF Load LoRA" node in our ComfyUI instance. To use this node, you will need:
- Repo ID (e.g., AP123/Example)
- File ID (e.g., Lora.safetensors)
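In an exported API graph, such a node would look roughly like the sketch below (shown as a Python dict so it can carry comments). The `class_type` and input names here are guesses, not confirmed; check the ComfyUI-GlifNodes repository for the node's exact definition.

```python
# Hypothetical "HF Load LoRA" node entry for an API graph.
hf_lora_node = {
    "inputs": {
        "repo_id": "AP123/Example",     # Hugging Face repo ID
        "file_id": "Lora.safetensors",  # file within that repo
    },
    "class_type": "HFLoadLoRA",  # placeholder name; verify against GlifNodes
}
```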
Upscale Models
- `4x_NMKD-Siax_200k.pth`: General upscaler
- `ESRGAN_4x.pth`: General upscaler
- `RealESRGAN_x2.pth`: General upscaler
- `RealESRGAN_x4.pth`: General upscaler

VAE (Variational Autoencoder) Models
- `kl-f8-anime2.ckpt`: Anime VAE (SD2.1)
- `orangemix.vae.pt`: Anime VAE (SD1.5)
- `sdxl_vae.safetensors`: Vanilla SDXL VAE
- `vae-ft-mse-840000-ema-pruned.safetensors`: Smoother-output VAE (SD1.5)

VAE Approximation Models
- `taesd_decoder.pth`: Preview decoder for SD1.x
- `taesd_encoder.pth`: Preview encoder for SD1.x
- `taesdxl_decoder.pth`: Preview decoder for SDXL
- `taesdxl_encoder.pth`: Preview encoder for SDXL

CLIP Vision Models
- `CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors`: ViT-H model, compatible with all IP-Adapter models
- `CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors`: ViT-G model (referenced in IP-Adapters)
- `clip-vit-large-patch14.bin`: ViT-L model (needed for the styles model)
- `clip_vision_g.safetensors`: Used for the ReVision control LoRA model

SAM (Segment Anything Model) Models
- `mobile_sam.pt`: MobileSAM
- `sam_vit_b_01ec64.pth`: Segment Anything (small)
- `sam_vit_h_4b8939.pth`: Segment Anything (large)
- `sam_vit_l_0b3195.pth`: Segment Anything (medium)

Embedding Models
- `bad_prompt_version2-neg.pt`: Negative prompt embedding for improving hands (SD1.5)
- `easynegative.safetensors`: General negative prompt embedding (SD1.5)
- `negative_hand-neg.pt`: Negative prompt embedding for hands (SD1.5)
- `ng_deepnegative_v1_75t.pt`: General negative prompt embedding (SD1.5)

GLIGEN Model
- `gligen_sd14_textbox_pruned_fp16.safetensors`: Enables workflows with annotated bounding boxes (SD1.5)

Grounding DINO Models
- `GroundingDINO_SwinB.cfg.py`: Configuration file
- `GroundingDINO_SwinT_OGC.cfg.py`: Configuration file for SwinT OGC
- `groundingdino_swinb_cogcoor.pth`: GroundingDINO SwinB model
- `groundingdino_swint_ogc.pth`: GroundingDINO SwinT OGC model

Background Removal Models
- `u2net.onnx`: Background removal model
- `u2net_human_seg.onnx`: Human segmentation model for background removal

Ultralytics Models
- `face_yolov8m.pt`: YOLO face detector
- `hand_yolov8s.pt`: YOLO hand detector
- `person_yolov8m-seg.pt`: YOLO person segmentation model

AnimateDiff Models
Various models for generating and controlling animated outputs, including LongAnimateDiff and motion modules.

Layer Diffusion Models
Models for applying layer diffusion to base models, including attention- and convolution-based models for both SD1.5 and SDXL.

Notes
- Some models may have multiple versions or variants available.
- Please refer to the specific model's documentation for usage instructions and compatibility information.
- This list is subject to updates as new models are added or existing ones are modified.
Plugins
ComfyUI Custom Nodes List
This list contains all the custom nodes installed in our ComfyUI instance. Each entry includes the GitHub repository reference and the specific commit used.
ComfyUI (Base)
Reference: https://github.com/comfyanonymous/ComfyUI
Commit: 45ec1cbe963055798765645c4f727122a7d3e35e
ControlNet Auxiliary Preprocessors
Reference: https://github.com/Fannovel16/comfyui_controlnet_aux
Commit: c0b33402d9cfdc01c4e0984c26e5aadfae948e05
Ultimate SD Upscale
Reference: https://github.com/ssitu/ComfyUI_UltimateSDUpscale
Commit: b303386bd363df16ad6706a13b3b47a1c2a1ea49
FizzNodes
Reference: https://github.com/FizzleDorf/ComfyUI_FizzNodes
Commit: fd2165162ed939d3c23ab6c63f206ae93457aad8
Video Helper Suite
Reference: https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
Commit: e369cac458f977fab0ee5719ce5e4057dc04729f
Inspire Pack
Reference: https://github.com/ltdrdata/ComfyUI-Inspire-Pack
Commit: 985f6a239b1aed0c67158f64bf579875ec292cb2
Frame Interpolation
Reference: https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
Commit: 5e11679995c68f33891c306a393915feefe234b5
ComfyMath
Reference: https://github.com/evanspearman/ComfyMath
Commit: be9beab9923ccf5c5e4132dc1653bcdfa773ed70
smZNodes
Reference: https://github.com/shiimizu/ComfyUI_smZNodes
Commit: 378ed4567f3290823d5dc5e9556c7d742dc82d23
Advanced ControlNet
Reference: https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet
Commit: 33d9884b76e8d7a2024691c5d98308e7e61bf38d
Segment Anything
Reference: https://github.com/storyicon/comfyui_segment_anything
Commit: ab6395596399d5048639cdab7e44ec9fae857a93
Impact Pack
Reference: https://github.com/ltdrdata/ComfyUI-Impact-Pack
Commit: 971c4a37aa4e77346eaf0ab80adf3972f430bec1
WAS Node Suite
Reference: https://github.com/WASasquatch/was-node-suite-comfyui
Commit: 6c3fed70655b737dc9b59da1cadb3c373c08d8ed
ComfyUI Essentials
Reference: https://github.com/cubiq/ComfyUI_essentials
Commit: bd9b89b7c924302e14bb353b87c3373af447bf55
KJNodes
Reference: https://github.com/kijai/ComfyUI-KJNodes
Commit: d25604536e88b42459cf7ead9a1306271ed7fe6f
Comfyroll Custom Nodes
Reference: https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
Commit: d78b780ae43fcf8c6b7c6505e6ffb4584281ceca
AnimateDiff Evolved
Reference: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
Commit: f9e0343f4c4606ee6365a9af4a7e16118f1c45e1
IPAdapter Plus
Reference: https://github.com/cubiq/ComfyUI_IPAdapter_plus
Commit: 0d0a7b3693baf8903fe2028ff218b557d619a93d
GlifNodes
Reference: https://github.com/glifxyz/ComfyUI-GlifNodes
Commit: 5d7e5c80aa175fb6ac860a6d63a15d1f699023fc
Moondream
Reference: https://github.com/kijai/ComfyUI-moondream
Commit: b97ad4718821d7cee5eacce139c94c9de51268b8
Layer Diffuse
Reference: https://github.com/huchenlei/ComfyUI-layerdiffuse
Commit: 151f7460bbc9d7437d4f0010f21f80178f7a84a6
rgthree-comfy
Reference: https://github.com/rgthree/rgthree-comfy
Commit: db062961ed4a3cd92f4eb2b8eeedbcc742b5d5e9
ComfyUI-SUPIR
Reference: https://github.com/kijai/ComfyUI-SUPIR
Commit: 656e55e8154a2cdfa0738a3474c8aa8e02113e66
Author: kijai
Description: SUPIR super-resolution
Note:
- Please refer to the respective GitHub repositories for detailed installation and usage instructions.
- This list is subject to updates as new nodes are added or existing ones are modified.
Exporting the graph
1. Enable developer options: go to settings and turn on "Enable Dev mode Options".
2. Export the API graph.
Graph example
This is a graph that should run via our ComfyUI Block. Note the Cloudinary link in the image field, and the link format: a value like `["13", 0]` means "output 0 of node 13".
```json
{
  "3": {
    "inputs": {
      "seed": 624586032019704,
      "steps": 4,
      "cfg": 1,
      "sampler_name": "lcm",
      "scheduler": "normal",
      "denoise": 1,
      "model": ["13", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    },
    "class_type": "KSampler"
  },
  "4": {
    "inputs": {
      "ckpt_name": "sd_xl_base_1.0.safetensors"
    },
    "class_type": "CheckpointLoaderSimple"
  },
  "5": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage"
  },
  "6": {
    "inputs": {
      "text": "man in space",
      "clip": ["10", 1]
    },
    "class_type": "CLIPTextEncode"
  },
  "7": {
    "inputs": {
      "text": "text, watermark",
      "clip": ["10", 1]
    },
    "class_type": "CLIPTextEncode"
  },
  "8": {
    "inputs": {
      "samples": ["3", 0],
      "vae": ["4", 2]
    },
    "class_type": "VAEDecode"
  },
  "9": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["8", 0]
    },
    "class_type": "SaveImage"
  },
  "10": {
    "inputs": {
      "lora_name": "lcm-lora-sdxl.safetensors",
      "strength_model": 1,
      "strength_clip": 1,
      "model": ["4", 0],
      "clip": ["4", 1]
    },
    "class_type": "LoraLoader"
  },
  "12": {
    "inputs": {
      "ipadapter_file": "ip-adapter-plus_sdxl_vit-h.safetensors"
    },
    "class_type": "IPAdapterModelLoader"
  },
  "13": {
    "inputs": {
      "weight": 0.3,
      "noise": 0,
      "weight_type": "original",
      "start_at": 0,
      "end_at": 1,
      "unfold_batch": false,
      "ipadapter": ["12", 0],
      "clip_vision": ["15", 0],
      "image": ["16", 0],
      "model": ["10", 0]
    },
    "class_type": "IPAdapterApply"
  },
  "15": {
    "inputs": {
      "clip_name": "vit-h-image-encoder.safetensors"
    },
    "class_type": "CLIPVisionLoader"
  },
  "16": {
    "inputs": {
      "image": "https://res.cloudinary.com/dzkwltgyd/image/upload/v1699969928/image-input-block-production/rcvznyr9hhewaf9tdnts.jpg",
      "choose file to upload": "image"
    },
    "class_type": "LoadImage"
  }
}
```
Example workflows
AnimateDiff
This is a basic AnimateDiff (original repo) workflow based on SD1.5:
(Images: the GIF output alongside the workflow image; drop the workflow image into Comfy to load it.)
From here on, you could:
- Add motion LoRAs to control the motion
- Use an upscaler to make it higher-res
- Use the LCM LoRA to make things faster
- Read how the context options might work: link
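If you rebuild this workflow in the API format, remember from the graph checks that the engine looks for a `VHS_VideoCombine` node as the animation output. The sketch below (a Python dict, so it can carry comments) shows roughly what that entry looks like; the input names and defaults may differ between VideoHelperSuite versions, and the node links are illustrative.

```python
# Illustrative VHS_VideoCombine output node for an animation graph.
video_combine_node = {
    "inputs": {
        "images": ["8", 0],               # link to the decoded frames (node ID is illustrative)
        "frame_rate": 8,
        "loop_count": 0,                  # 0 = loop forever
        "filename_prefix": "AnimateDiff",
        "format": "image/gif",
        "pingpong": False,
        "save_output": True,
    },
    "class_type": "VHS_VideoCombine",
}
```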
IPAdapter with Tile ControlNet
Txt2Image with SDXL + Upscaler
This workflow uses SDXL to create a base image and then runs it through the Ultimate SD Upscale node. Ultimate SD Upscale works best with a tile ControlNet, which is why we load in an SD1.5 checkpoint for the upscaling pass; a sketch of the tile ControlNet wiring follows below.
(Images: the SDXL output (1K) next to the upscaled result (2K).)
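The tile ControlNet portion of such a graph can be wired with the core ControlNetLoader and ControlNetApply nodes plus the SD1.5 tile model from the list above. The sketch below (a Python dict, for the comments) is illustrative: the node IDs, upstream links, and strength value are assumptions, not taken from the actual workflow.

```python
# Illustrative tile-ControlNet wiring for the SD1.5 upscaling pass.
tile_controlnet_nodes = {
    "20": {
        "inputs": {"control_net_name": "control_v11f1e_sd15_tile_fp16.safetensors"},
        "class_type": "ControlNetLoader",
    },
    "21": {
        "inputs": {
            "conditioning": ["6", 0],  # positive prompt conditioning (illustrative link)
            "control_net": ["20", 0],  # the tile ControlNet loaded above
            "image": ["8", 0],         # the decoded SDXL base image (illustrative link)
            "strength": 1.0,
        },
        "class_type": "ControlNetApply",
    },
}
```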