🧱 ComfyUI Block
ComfyUI is an extremely powerful workflow builder for Stable Diffusion. It has a node-based GUI and is aimed at advanced users.
Advanced/Experimental. Contact us or join our Discord to report issues.
There is a new button to edit the workflow in a preview!
You can drag your (API) graph .json file in here and run it!
Note: you need to have exactly one SaveImage or VHS_VideoCombine node; the engine will search for this node and set it as the last_node_id.
Note: all LoadImage image fields need to be valid image links (jpg and png preferred). You can get these by uploading an image with the image uploading block. Downloading videos is not yet supported.
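The two requirements above can be checked before submitting a graph. This is a minimal sketch (an assumption about how such a check might look, not the engine's actual code) that locates the single required output node and returns its id for use as last_node_id:

```python
# Hypothetical pre-flight check for an exported API graph (assumption,
# not the engine's real implementation): find the one output node whose
# id the engine would use as last_node_id.
OUTPUT_NODE_TYPES = {"SaveImage", "VHS_VideoCombine"}

def find_last_node_id(workflow: dict) -> str:
    """Return the id of the single SaveImage/VHS_VideoCombine node."""
    matches = [
        node_id
        for node_id, node in workflow.items()
        if node.get("class_type") in OUTPUT_NODE_TYPES
    ]
    if len(matches) != 1:
        raise ValueError(f"expected exactly one output node, found {len(matches)}")
    return matches[0]
```

Running this on a graph with a single SaveImage node returns that node's id; a graph with zero or several output nodes raises an error instead of failing later in the engine.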
SD1.5 (Stable Diffusion 1.5)
v1-5-pruned-emaonly.ckpt: Vanilla SD1.5 model
SDXL (Stable Diffusion XL)
sd_xl_base_1.0.safetensors: Vanilla SDXL model
sd_xl_refiner_1.0.safetensors: SDXL refiner (use only for refinement, not as a base model)
SD2.1 (Stable Diffusion 2.1)
SD1.5-based
DreamShaper8_LCM.safetensors: Fast universal model with LCM built in
Realistic_Vision_V5.1_fp16-no-ema.safetensors: Photorealistic model
arthemyObjects_v10.safetensors: General object-based model (not focused on human shapes)
SDXL-based
albedobaseXL_v12.safetensors: Universally great SDXL model
fabricated_reality_sdxl_v14.safetensors: Realistic SDXL model with improvements for eyes, teeth, and hands
SD1.5 Compatible
control_boxdepth_LooseControlfp16.safetensors: Loose ControlNet model
control_sd15_inpaint_depth_hand_fp16.safetensors: Depth estimation for hands
control_v11e_sd15_ip2p_fp16.safetensors: Pixel-to-pixel instruction
control_v11e_sd15_shuffle_fp16.safetensors: Image shuffling
control_v11f1e_sd15_tile_fp16.safetensors: Image tiling
control_v11f1p_sd15_depth_fp16.safetensors: Depth estimation
control_v11p_sd15_canny_fp16.safetensors: Canny edge detection
control_v11p_sd15_inpaint_fp16.safetensors: Image inpainting
control_v11p_sd15_lineart_fp16.safetensors: Lineart
control_v11p_sd15_mlsd_fp16.safetensors: Multi-level line segment detection
control_v11p_sd15_normalbae_fp16.safetensors: Surface normal estimation
control_v11p_sd15_openpose_fp16.safetensors: Human pose estimation
control_v11p_sd15_scribble_fp16.safetensors: Scribble-based image generation
control_v11p_sd15_seg_fp16.safetensors: Segmentation
control_v11p_sd15_softedge_fp16.safetensors: Soft edge image generation
control_v11p_sd15s2_lineart_anime_fp16.safetensors: Anime line art generation
control_v11u_sd15_tile_fp16.safetensors: Tile-based ControlNet
control_v1p_sd15_qrcode_monster.safetensors: Creative QR code generation
temporalnetversion2.safetensors: Enhances temporal consistency of generated outputs
SDXL Compatible
OpenPoseXL2.safetensors: OpenPose model for SDXL
control-lora-canny-rank128.safetensors: Canny edge detection
control-lora-depth-rank128.safetensors: Depth estimation
control-lora-recolor-rank128.safetensors: Colorize black-and-white photographs
control-lora-sketch-rank128-metadata.safetensors: Color in drawings
control_v1p_sdxl_qrcode_monster.safetensors: Creative QR code generation for SDXL
controlnet-sd-xl-1.0-softedge-dexined.safetensors: Soft edge preprocessing
depth-zoe-xl-v1.0-controlnet.safetensors: Depth estimation
ip-adapter-full-face_sd15.safetensors: Updated version of ip-adapter-face (SD1.5)
ip-adapter-plus-face_sd15.safetensors: Face-specific version of ip-adapter-plus (SD1.5)
ip-adapter-plus-face_sdxl_vit-h.safetensors: Face-specific version for SDXL
ip-adapter-plus_sd15.safetensors: Enhanced version for SD1.5
ip-adapter-plus_sdxl_vit-h.safetensors: Enhanced version for SDXL
ip-adapter_sd15.safetensors: Base version for SD1.5
ip-adapter_sd15_light.safetensors: Lightweight version for SD1.5
ip-adapter_sd15_vit-G.safetensors: ViT-G version for SD1.5
ip-adapter_sdxl.safetensors: Base version for SDXL
ip-adapter_sdxl_vit-h.safetensors: ViT-H version for SDXL
PS1Redmond-PS1Game-Playstation1Graphics.safetensors: PlayStation 1 style (SDXL)
PixelArtRedmond-Lite64.safetensors: Pixel art style (SDXL)
StickersRedmond.safetensors: Sticker style (SDXL)
add-detail-xl.safetensors: Detail enhancer for SDXL
add_detail.safetensors: Detail enhancer for SD1.5
cereal_box_sdxl_v1.safetensors: Cereal box cover style (SDXL)
lcm-lora-sdv1-5.safetensors: LCM for faster sampling (SD1.5)
lcm-lora-sdxl.safetensors: LCM for faster sampling (SDXL)
more_details.safetensors: Detail and composition enhancer (SD1.5)
sd_xl_offset_example-lora_1.0.safetensors: SNR improvement for SDXL
sdxl_lightning_2step_lora.safetensors: Fast text-to-image generation (SDXL)
sdxl_lightning_4step_lora.safetensors: Fast text-to-image generation (SDXL)
sdxl_lightning_8step_lora.safetensors: Fast text-to-image generation (SDXL)
theovercomer8sContrastFix_sd15.safetensors: Contrast improvement (SD1.5)
theovercomer8sContrastFix_sd21768.safetensors: Contrast improvement (SD2.1)
Important Note: You can add any LoRA from Hugging Face using the "HF Load LoRA" node in our ComfyUI instance. To use this node, you will need:
Repo ID (e.g., AP123/Example)
File ID (e.g., Lora.safetensors)
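With those two values, the node resolves the LoRA file on Hugging Face. As a sketch of how the two fields map onto a node in an exported API graph (the exact input keys and class_type name here are assumptions for illustration, not confirmed from the node's source):

```json
{
  "inputs": {
    "repo_id": "AP123/Example",
    "file_id": "Lora.safetensors"
  },
  "class_type": "HF Load LoRA"
}
```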
4x_NMKD-Siax_200k.pth: General upscaler
ESRGAN_4x.pth: General upscaler
RealESRGAN_x2.pth: General upscaler
RealESRGAN_x4.pth: General upscaler
kl-f8-anime2.ckpt: Anime VAE (SD2.1)
orangemix.vae.pt: Anime VAE (SD1.5)
sdxl_vae.safetensors: Vanilla SDXL VAE
vae-ft-mse-840000-ema-pruned.safetensors: Smoother output VAE (SD1.5)
taesd_decoder.pth: Preview decoder for SD1.x
taesd_encoder.pth: Preview encoder for SD1.x
taesdxl_decoder.pth: Preview decoder for SDXL
taesdxl_encoder.pth: Preview encoder for SDXL
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors: ViT-H model, compatible with all IP-Adapter models
CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors: ViT-G model (referenced in IP-Adapters)
clip-vit-large-patch14.bin: ViT-L model (needed for the styles model)
clip_vision_g.safetensors: Used for the ReVision control LoRA model
mobile_sam.pt: MobileSAM
sam_vit_b_01ec64.pth: Segment Anything (small)
sam_vit_h_4b8939.pth: Segment Anything (large)
sam_vit_l_0b3195.pth: Segment Anything (medium)
bad_prompt_version2-neg.pt: Negative prompt embedding for improving hands (SD1.5)
easynegative.safetensors: General negative prompt embedding (SD1.5)
negative_hand-neg.pt: Negative prompt embedding for hands (SD1.5)
ng_deepnegative_v1_75t.pt: General negative prompt embedding (SD1.5)
gligen_sd14_textbox_pruned_fp16.safetensors: Enables workflows with annotated bounding boxes (SD1.5)
GroundingDINO_SwinB.cfg.py: Configuration file
GroundingDINO_SwinT_OGC.cfg.py: Configuration file for SwinT OGC
groundingdino_swinb_cogcoor.pth: GroundingDINO SwinB model
groundingdino_swint_ogc.pth: GroundingDINO SwinT OGC model
u2net.onnx: Background removal model
u2net_human_seg.onnx: Human segmentation model for background removal
face_yolov8m.pt: YOLO face detector
hand_yolov8s.pt: YOLO hand detector
person_yolov8m-seg.pt: YOLO person segmentation model
Various models for generating and controlling animated outputs, including LongAnimateDiff and motion modules
Models for applying layer diffusion to base models, including attention and convolution-based models for both SD1.5 and SDXL
Some models may have multiple versions or variants available.
Please refer to the specific model's documentation for usage instructions and compatibility information.
This list is subject to updates as new models are added or existing ones are modified.
This list contains all the custom nodes installed in our ComfyUI instance. Each entry includes the GitHub repository reference and the specific commit used.
Reference: https://github.com/comfyanonymous/ComfyUI
Commit: 45ec1cbe963055798765645c4f727122a7d3e35e
Reference: https://github.com/Fannovel16/comfyui_controlnet_aux
Commit: c0b33402d9cfdc01c4e0984c26e5aadfae948e05
Reference: https://github.com/ssitu/ComfyUI_UltimateSDUpscale
Commit: b303386bd363df16ad6706a13b3b47a1c2a1ea49
Reference: https://github.com/FizzleDorf/ComfyUI_FizzNodes
Commit: fd2165162ed939d3c23ab6c63f206ae93457aad8
Reference: https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
Commit: e369cac458f977fab0ee5719ce5e4057dc04729f
Reference: https://github.com/ltdrdata/ComfyUI-Inspire-Pack
Commit: 985f6a239b1aed0c67158f64bf579875ec292cb2
Reference: https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
Commit: 5e11679995c68f33891c306a393915feefe234b5
Reference: https://github.com/evanspearman/ComfyMath
Commit: be9beab9923ccf5c5e4132dc1653bcdfa773ed70
Reference: https://github.com/shiimizu/ComfyUI_smZNodes
Commit: 378ed4567f3290823d5dc5e9556c7d742dc82d23
Reference: https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet
Commit: 33d9884b76e8d7a2024691c5d98308e7e61bf38d
Reference: https://github.com/storyicon/comfyui_segment_anything
Commit: ab6395596399d5048639cdab7e44ec9fae857a93
Reference: https://github.com/ltdrdata/ComfyUI-Impact-Pack
Commit: 971c4a37aa4e77346eaf0ab80adf3972f430bec1
Reference: https://github.com/WASasquatch/was-node-suite-comfyui
Commit: 6c3fed70655b737dc9b59da1cadb3c373c08d8ed
Reference: https://github.com/cubiq/ComfyUI_essentials
Commit: bd9b89b7c924302e14bb353b87c3373af447bf55
Reference: https://github.com/kijai/ComfyUI-KJNodes
Commit: d25604536e88b42459cf7ead9a1306271ed7fe6f
Reference: https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
Commit: d78b780ae43fcf8c6b7c6505e6ffb4584281ceca
Reference: https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
Commit: f9e0343f4c4606ee6365a9af4a7e16118f1c45e1
Reference: https://github.com/cubiq/ComfyUI_IPAdapter_plus
Commit: 0d0a7b3693baf8903fe2028ff218b557d619a93d
Reference: https://github.com/glifxyz/ComfyUI-GlifNodes
Commit: 5d7e5c80aa175fb6ac860a6d63a15d1f699023fc
Reference: https://github.com/kijai/ComfyUI-moondream
Commit: b97ad4718821d7cee5eacce139c94c9de51268b8
Reference: https://github.com/huchenlei/ComfyUI-layerdiffuse
Commit: 151f7460bbc9d7437d4f0010f21f80178f7a84a6
Reference: https://github.com/rgthree/rgthree-comfy
Commit: db062961ed4a3cd92f4eb2b8eeedbcc742b5d5e9
Reference: https://github.com/kijai/ComfyUI-SUPIR
Commit: 656e55e8154a2cdfa0738a3474c8aa8e02113e66
Author: kijai
Description: SUPIR super-resolution
Please refer to the respective GitHub repositories for detailed installation and usage instructions.
This list is subject to updates as new nodes are added or existing ones are modified.
Enable developer options:
Go to Settings and enable "Enable Dev mode Options".
Export the API graph:
This graph should run via our Comfy Block. Mind the Cloudinary link in the image field.
```json
{
  "3": {
    "inputs": {
      "seed": 624586032019704,
      "steps": 4,
      "cfg": 1,
      "sampler_name": "lcm",
      "scheduler": "normal",
      "denoise": 1,
      "model": ["13", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    },
    "class_type": "KSampler"
  },
  "4": {
    "inputs": {
      "ckpt_name": "sd_xl_base_1.0.safetensors"
    },
    "class_type": "CheckpointLoaderSimple"
  },
  "5": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage"
  },
  "6": {
    "inputs": {
      "text": "man in space",
      "clip": ["10", 1]
    },
    "class_type": "CLIPTextEncode"
  },
  "7": {
    "inputs": {
      "text": "text, watermark",
      "clip": ["10", 1]
    },
    "class_type": "CLIPTextEncode"
  },
  "8": {
    "inputs": {
      "samples": ["3", 0],
      "vae": ["4", 2]
    },
    "class_type": "VAEDecode"
  },
  "9": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["8", 0]
    },
    "class_type": "SaveImage"
  },
  "10": {
    "inputs": {
      "lora_name": "lcm-lora-sdxl.safetensors",
      "strength_model": 1,
      "strength_clip": 1,
      "model": ["4", 0],
      "clip": ["4", 1]
    },
    "class_type": "LoraLoader"
  },
  "12": {
    "inputs": {
      "ipadapter_file": "ip-adapter-plus_sdxl_vit-h.safetensors"
    },
    "class_type": "IPAdapterModelLoader"
  },
  "13": {
    "inputs": {
      "weight": 0.3,
      "noise": 0,
      "weight_type": "original",
      "start_at": 0,
      "end_at": 1,
      "unfold_batch": false,
      "ipadapter": ["12", 0],
      "clip_vision": ["15", 0],
      "image": ["16", 0],
      "model": ["10", 0]
    },
    "class_type": "IPAdapterApply"
  },
  "15": {
    "inputs": {
      "clip_name": "vit-h-image-encoder.safetensors"
    },
    "class_type": "CLIPVisionLoader"
  },
  "16": {
    "inputs": {
      "image": "https://res.cloudinary.com/dzkwltgyd/image/upload/v1699969928/image-input-block-production/rcvznyr9hhewaf9tdnts.jpg",
      "choose file to upload": "image"
    },
    "class_type": "LoadImage"
  }
}
```
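If you run your own ComfyUI server instead of the block, an exported API graph like the one above can be queued programmatically. This sketch assumes a locally reachable instance at ComfyUI's default address (127.0.0.1:8188); the hosted Comfy Block accepts the same JSON through its UI instead:

```python
# Sketch: queueing an exported API graph against a local ComfyUI server
# via its /prompt endpoint. The server URL is an assumption (default
# local instance), not part of the hosted block.
import json
import urllib.request

def build_prompt_payload(graph: dict) -> bytes:
    # /prompt expects the API graph wrapped under the "prompt" key
    return json.dumps({"prompt": graph}).encode("utf-8")

def queue_workflow(graph: dict, server: str = "http://127.0.0.1:8188") -> dict:
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_prompt_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the queued prompt_id
```

You would load the exported .json file with `json.load` and pass the resulting dict to `queue_workflow`.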
This is a basic AnimateDiff (original repo) workflow based on SD1.5:
From here on, you could:
Add motion LoRAs to control the motion
Use an upscaler to make it higher res
Use LCM LoRA to make things faster
Read how the context options might work: link
This workflow uses SDXL to create a base image, then applies the UltimateSD upscale block. The UltimateSD upscale block works best with a tile ControlNet, so we load in an SD1.5 checkpoint.