🧱 ComfyUI Block

ComfyUI is an extremely powerful Stable Diffusion workflow builder. It has a node-based GUI and is aimed at advanced users.

Where to find it in the builder

Sample

There is a new button to edit the workflow in a preview!

You can drag your exported API graph (.json) in here and run it!

Graph checks

  • ✅ Your graph must contain exactly one SaveImage or VHS_VideoCombine node; the engine searches for it and sets it as the last_node_id.

  • ✅ All LoadImage image fields must be valid image links (jpg and png preferred). You can get these by uploading an image with the image uploading block. Downloading videos is not yet supported.
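Before submitting a graph, you could sanity-check these two rules with a short script. This is an illustrative sketch, not part of the engine: the function name and the URL heuristic are assumptions.

```python
# Sketch: validate an exported ComfyUI API graph against the two checks above.
OUTPUT_NODES = {"SaveImage", "VHS_VideoCombine"}

def check_graph(graph: dict) -> list[str]:
    """Return a list of problems found in a ComfyUI API graph dict."""
    problems = []
    # The engine expects exactly one output node to use as last_node_id.
    outputs = [nid for nid, node in graph.items()
               if node.get("class_type") in OUTPUT_NODES]
    if len(outputs) != 1:
        problems.append(f"expected exactly one output node, found {len(outputs)}")
    # Every LoadImage node's image field must be a jpg/png URL.
    for nid, node in graph.items():
        if node.get("class_type") == "LoadImage":
            image = node.get("inputs", {}).get("image", "")
            if not (image.startswith("http")
                    and image.lower().endswith((".jpg", ".jpeg", ".png"))):
                problems.append(f"node {nid}: image field is not a jpg/png URL")
    return problems
```

An empty list means the graph passes both checks.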

Compatible models

grounding-dino
- GroundingDINO_SwinB.cfg.py 
- GroundingDINO_SwinT_OGC.cfg.py base=DINO description=GroundingDINO SwinT OGC CFG File
- groundingdino_swinb_cogcoor.pth ⚠️ -> use "GroundingDINO_SwinB (938MB)" 
- groundingdino_swint_ogc.pth ⚠️ -> use "GroundingDINO_SwinT_OGC (694MB)" base=DINO description=GroundingDINO SwinT OGC Model
rembg
- u2net.onnx 
- u2net_human_seg.onnx 
sams
- sam_vit_b_01ec64.pth ⚠️ -> use "sam_vit_b (375MB)" base=SAM description=segment anything small
ultralytics
- bbox
- - face_yolov8m.pt base=Ultralytics description=yolo face detector
- - hand_yolov8s.pt base=Ultralytics description=yolo hand detector
- segm
- - person_yolov8m-seg.pt base=Ultralytics description=yolo person segmentation model
/root/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models
- lt_long_mm_16_64_frames.ckpt base=SD1.x description=LongAnimateDiff model, capable of generating videos with frame counts ranging from 16 to 64. For optimal results, we recommend using a motion scale of 1.28
- lt_long_mm_16_64_frames_v1.1.ckpt base=SD1.x description=LongAnimateDiff model, capable of generating videos with frame counts ranging from 16 to 64. For optimal results, we recommend using a motion scale of 1.28
- lt_long_mm_32_frames.ckpt base=SD1.x description=specialized model designed to generate 32-frame videos. This model typically produces higher quality videos compared to the LongAnimateDiff model supporting 16-64 frames. For better results, use a motion scale of 1.15
- mm-Stabilized_high.pth base=SD1.x description=much more stable than the base model, but at the cost of having much less movement
- mm-Stabilized_mid.pth base=SD1.x description=a bit more stable than the base model
- mm_sd_v14.ckpt base=SD1.x description=motion module for animatediff
- mm_sd_v15.ckpt base=SD1.x description=motion module for animatediff
- mm_sd_v15_v2.ckpt base=SD1.x description=v2 motion module for animatediff
- mm_sdxl_v10_beta.ckpt base=SDXL description=animatediff model for sdxl
- temporaldiff-v1-animatediff.ckpt base=SD1.x description=TemporalDiff is a finetune of the original AnimateDiff weights on a higher resolution dataset
- v3_sd15_mm.ckpt base=SD1.x description=v3 motion module for animatediff
/root/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/motion_lora
- .gitkeep 
- v2_lora_PanLeft.ckpt base=SD1.x description=lora to create a pan left motion effect
- v2_lora_PanRight.ckpt base=SD1.x description=lora to create a pan right motion effect
- v2_lora_RollingAnticlockwise.ckpt base=SD1.x description=lora to create a rolling anticlockwise motion effect
- v2_lora_RollingClockwise.ckpt base=SD1.x description=lora to create a rolling clockwise motion effect
- v2_lora_TiltDown.ckpt base=SD1.x description=lora to create a tilt down motion effect
- v2_lora_TiltUp.ckpt base=SD1.x description=lora to create a tilt up motion effect
- v2_lora_ZoomIn.ckpt base=SD1.x description=lora to create a zoom in motion effect
- v2_lora_ZoomOut.ckpt base=SD1.x description=lora to create a zoom out motion effect
/root/ComfyUI/custom_nodes/ComfyUI-moondream/checkpoints
- moondream2
- - .gitattributes 
- - README.md 
- - added_tokens.json 
- - config.json 
- - configuration_moondream.py 
- - generation_config.json 
- - merges.txt 
- - model.safetensors base=Stable Cascade description=[1.39GB] Stable Cascade: text_encoder
- - modeling_phi.py 
- - moondream.py 
- - moondream2-mmproj-f16.gguf 
- - moondream2-text-model-f16.gguf 
- - special_tokens_map.json 
- - tokenizer.json 
- - tokenizer_config.json 
- - versions.txt 
- - vision_encoder.py 
- - vocab.json 
/root/comfy_volume
checkpoints
- DreamShaper8_LCM.safetensors base=SD1.5 description=A fast universal model with LCM built in
- Realistic_Vision_V5.1_fp16-no-ema.safetensors base=SD1.5 description=a photorealistic 1.5 model
- SUPIR-v0Q.ckpt base=SDXL description=SUPIR superresolution Q model
- albedobaseXL_v12.safetensors base=SDXL description=a universally great SDXL model
- arthemyObjects_v10.safetensors base=SD1.5 description=This model has been created with the intent of making a general object-based model, which means that realistic human shapes are not in the scope of this merge.
- fabricated_reality_sdxl_v14.safetensors base=SDXL description=This is a realistic SDXL model I have been finetuning, merging, and tweaking since SDXL came out. I have done a ton of target guidance training to fix eyes, teeth, and hands, as well as many other aspects.
- sd_xl_base_1.0.safetensors base=SDXL description=vanilla SDXL model
- sd_xl_refiner_1.0.safetensors base=SDXL description=SDXL refiner, don't use as a base model, only to refine
- v1-5-pruned-emaonly.ckpt base=SD1.5 description=vanilla SD1.5
clip_vision
- CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors base=ViT-H description=clip vit-h, compatible with all ip-adapter models
- CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors base=ViT-G description=clip vit-g, referenced in ipadapters but not used
- clip-vit-large-patch14.bin base=ViT-L description=[1.7GB] CLIPVision model (needed for styles model)
- clip_vision_g.safetensors base=vit-g description=used for ReVision control lora model
controlnet
- OpenPoseXL2.safetensors base=SDXL description=ControlNet openpose model for SDXL
- TTPLANET_Controlnet_Tile_realistic_v2_fp16.safetensors 
- control-lora-canny-rank128.safetensors base=SDXL description=trained with canny edge detection, white edges on a black background conditioning
- control-lora-canny-rank256.safetensors base=SDXL description=trained with canny edge detection, white edges on a black background conditioning
- control-lora-depth-rank128.safetensors base=SDXL description=trained with depth estimation, grayscale depth conditioning
- control-lora-depth-rank256.safetensors base=SDXL description=trained with depth estimation, grayscale depth conditioning
- control-lora-recolor-rank128.safetensors base=SDXL description=designed to colorize black and white photographs
- control-lora-recolor-rank256.safetensors base=SDXL description=designed to colorize black and white photographs
- control-lora-sketch-rank128-metadata.safetensors base=SDXL description=designed to color in drawings input as a white-on-black image
- control-lora-sketch-rank256.safetensors base=SDXL description=designed to color in drawings input as a white-on-black image
- control_boxdepth_LooseControlfp16.safetensors base=SD1.5 description=Loose ControlNet model
- control_sd15_inpaint_depth_hand_fp16.safetensors base=SD1.5 description=trained with depth estimation for hands, grayscale depth conditioning
- control_v11e_sd15_ip2p_fp16.safetensors base=SD1.5 description=trained with pixel to pixel instruction, plain conditioning
- control_v11e_sd15_shuffle_fp16.safetensors base=SD1.5 description=trained with image shuffling, image with shuffled patches or regions as conditioning
- control_v11f1e_sd15_tile_fp16.safetensors base=SD1.5 description=trained with image tiling, blurry image or part of an image as conditioning
- control_v11f1p_sd15_depth_fp16.safetensors base=SD1.5 description=trained with depth estimation, grayscale depth conditioning
- control_v11p_sd15_canny_fp16.safetensors base=SD1.5 description=trained with canny edge detection, white edges on a black background conditioning
- control_v11p_sd15_inpaint_fp16.safetensors base=SD1.5 description=trained with image inpainting, plain conditioning
- control_v11p_sd15_lineart_fp16.safetensors base=SD1.5 description=trained with lineart, line drawings as conditioning
- control_v11p_sd15_mlsd_fp16.safetensors base=SD1.5 description=trained with multi-level line segment detection, annotated line segments as conditioning
- control_v11p_sd15_normalbae_fp16.safetensors base=SD1.5 description=trained with surface normal estimation, image with surface normal information, usually represented as a color-coded image
- control_v11p_sd15_openpose_fp16.safetensors base=SD1.5 description=trained with human pose estimation, image with human poses, usually represented as a set of keypoints or skeletons as conditioning
- control_v11p_sd15_scribble_fp16.safetensors base=SD1.5 description=trained with scribble-based image generation, image with scribbles, usually random or user-drawn strokes
- control_v11p_sd15_seg_fp16.safetensors base=SD1.5 description=trained with segmentation, color-coded segments conditioning
- control_v11p_sd15_softedge_fp16.safetensors base=SD1.5 description=trained with soft edge image generation, soft edges usually to create a more painterly or artistic effect as conditioning
- control_v11p_sd15s2_lineart_anime_fp16.safetensors base=SD1.5 description=trained with anime line art generation, image with anime-style line art as conditioning
- control_v11u_sd15_tile_fp16.safetensors base=SD1.5 description=Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints (tile) / v11u
- control_v1p_sd15_qrcode_monster.safetensors base=SD1.5 description=This model is made to generate creative QR codes that still scan.
- control_v1p_sdxl_qrcode_monster.safetensors base=SDXL description=This model is made to generate creative QR codes that still scan. Illusions should also work well.
- controlnet-sd-xl-1.0-softedge-dexined.safetensors base=SDXL description=trained with dexined soft edge preprocessing, white lines on black
- depth-zoe-xl-v1.0-controlnet.safetensors base=SDXL description=trained with depth estimation, grayscale depth conditioning
- temporalnetversion2.ckpt 
- temporalnetversion2.safetensors base=SD1.5 description=TemporalNet was a ControlNet model designed to enhance the temporal consistency of generated outputs
embeddings
- bad_prompt_version2-neg.pt base=SD1.5 description=negative prompt embedding, useful for improving hands
- easynegative.safetensors base=SD1.5 description=general negative prompt embedding
- negative_hand-neg.pt base=SD1.5 description=negative prompt embedding specifically for hands
- ng_deepnegative_v1_75t.pt base=SD1.5 description=general negative prompt embedding
gligen
- gligen_sd14_textbox_pruned_fp16.safetensors base=SD1.5 description=enables creating a workflow with annotated bboxes to decide where goes what in an image
ipadapter
- ip-adapter-full-face_sd15.safetensors base=SD1.5 description=updated version of ip-adapter-face
- ip-adapter-plus-face_sd15.safetensors base=SD1.5 description=same as ip-adapter-plus_sd15, but use cropped face image as condition
- ip-adapter-plus-face_sdxl_vit-h.safetensors base=SDXL description=use patch image embeddings from OpenCLIP-ViT-H-14 as condition, closer to the reference image than ip-adapter_xl and ip-adapter_sdxl_vit-h
- ip-adapter-plus_sd15.safetensors base=SD1.5 description=use patch image embeddings from OpenCLIP-ViT-H-14 as condition, closer to the reference image than ip-adapter_sd15
- ip-adapter-plus_sdxl_vit-h.safetensors base=SDXL description=use patch image embeddings from OpenCLIP-ViT-H-14 as condition, closer to the reference image than ip-adapter_xl and ip-adapter_sdxl_vit-h
- ip-adapter_sd15.safetensors base=SD1.5 description=use global image embedding from OpenCLIP-ViT-H-14 as condition
- ip-adapter_sd15_light.safetensors base=SD1.5 description=same as ip-adapter_sd15, but more compatible with text prompt
- ip-adapter_sd15_vit-G.safetensors base=SD1.5 description=same as ip-adapter_sd15, but uses OpenCLIP-ViT-bigG-14
- ip-adapter_sdxl.safetensors base=SDXL description=use global image embedding from OpenCLIP-ViT-bigG-14 as condition
- ip-adapter_sdxl_vit-h.safetensors base=SDXL description=same as ip-adapter_sdxl, but use OpenCLIP-ViT-H-14
layer_model
- layer_sd15_transparent_attn.safetensors base=SD1.5 description=layer diffusion model to apply to base model using attn
- layer_sd15_vae_transparent_decoder.safetensors base=SD1.5 description=decoder used for layer diffusion
- layer_xl_transparent_attn.safetensors base=SDXL description=layer diffusion model to apply to base model using attn
- layer_xl_transparent_conv.safetensors base=SDXL description=layer diffusion model to apply to base model using conv
- vae_transparent_decoder.safetensors base=SDXL description=decoder used for layer diffusion
loras
- PS1Redmond-PS1Game-Playstation1Graphics.safetensors base=SDXL description=This Lora applies a Playstation 1 style
- PixelArtRedmond-Lite64.safetensors base=SDXL description=Lora to create pixel art, tag for the model: Pixel Art, PixArFK
- StickersRedmond.safetensors base=SDXL description=Lora to create stickers. Tag for the model: Stickers, Sticker
- add-detail-xl.safetensors base=SDXL description=Detail tweaker for SDXL.
- add_detail.safetensors base=SD1.5 description=LoRA for enhancing detail while keeping the overall style/character
- cereal_box_sdxl_v1.safetensors base=SDXL description=LoRA to create cereal box covers. No trigger word needed.
- lcm-lora-sdv1-5.safetensors base=SD1.5 description=lora for applying LCM deltas which allow less sampling steps
- lcm-lora-sdxl.safetensors base=SDXL description=lora for applying LCM deltas which allow less sampling steps
- manny_lora_koyha_800_1024.safetensors 
- more_details.safetensors base=SD1.5 description=lora for more details, sharpening and slightly better composition
- sd_xl_offset_example-lora_1.0.safetensors base=SDXL description=Stable Diffusion XL offset LoRA to improve SNR
- sdxl_lightning_2step_lora.safetensors base=SDXL description=SDXL-Lightning is a lightning-fast text-to-image generation model.
- sdxl_lightning_4step_lora.safetensors base=SDXL description=SDXL-Lightning is a lightning-fast text-to-image generation model.
- sdxl_lightning_8step_lora.safetensors base=SDXL description=SDXL-Lightning is a lightning-fast text-to-image generation model.
- theovercomer8sContrastFix_sd15.safetensors base=SD1.5 description=lora for less overblown and bright images for sd15
- theovercomer8sContrastFix_sd21768.safetensors base=SD2.1 description=lora for less overblown and bright images for sd21
sams
- mobile_sam.pt base=SAM description=MobileSAM
- sam_vit_b_01ec64.pth ⚠️ -> use "sam_vit_b (375MB)" base=SAM description=segment anything small
- sam_vit_h_4b8939.pth ⚠️ -> use "sam_vit_h (2.56GB)" base=SAM description=segment anything large
- sam_vit_l_0b3195.pth ⚠️ -> use "sam_vit_l (1.25GB)" base=SAM description=segment anything medium
upscale_models
- 4x_NMKD-Siax_200k.pth base=upscale description=general upscaler
- ESRGAN_4x.pth base=upscale description=general upscaler
- RealESRGAN_x2.pth base=upscale description=general upscaler
- RealESRGAN_x4.pth base=upscale description=general upscaler
vae
- kl-f8-anime2.ckpt base=SD2.1 VAE description=anime vae
- orangemix.vae.pt base=SD1.5 VAE description=anime vae
- sdxl_vae.safetensors base=SDXL VAE description=vanilla sdxl vae
- vae-ft-mse-840000-ema-pruned.safetensors base=SD1.5 VAE description=resumed from ft-EMA and uses EMA weights and was trained for another 280k steps using a different loss, with more emphasis on MSE reconstruction (MSE + 0.1 * LPIPS). It produces somewhat smoother outputs.
vae_approx
- taesd_decoder.pth base=SD1.x description=To view the preview in high quality while running samples in ComfyUI, you will need this model.
- taesd_encoder.pth base=SD1.x description=To view the preview in high quality while running samples in ComfyUI, you will need this model.
- taesdxl_decoder.pth base=SDXL description=(SDXL version) To view the preview in high quality while running samples in ComfyUI, you will need this model.
- taesdxl_encoder.pth base=SDXL description=(SDXL version) To view the preview in high quality while running samples in ComfyUI, you will need this model.

Plugins

[
  {
    "reference": "https://github.com/comfyanonymous/ComfyUI",
    "commit": "45ec1cbe963055798765645c4f727122a7d3e35e",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/Fannovel16/comfyui_controlnet_aux",
    "commit": "c0b33402d9cfdc01c4e0984c26e5aadfae948e05",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/ssitu/ComfyUI_UltimateSDUpscale",
    "commit": "b303386bd363df16ad6706a13b3b47a1c2a1ea49",
    "install_file": null
  },
  {
    "reference": "https://github.com/FizzleDorf/ComfyUI_FizzNodes",
    "commit": "fd2165162ed939d3c23ab6c63f206ae93457aad8",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
    "commit": "e369cac458f977fab0ee5719ce5e4057dc04729f",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/ltdrdata/ComfyUI-Inspire-Pack",
    "commit": "985f6a239b1aed0c67158f64bf579875ec292cb2",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/Fannovel16/ComfyUI-Frame-Interpolation",
    "commit": "5e11679995c68f33891c306a393915feefe234b5",
    "install_file": "install.py"
  },
  {
    "reference": "https://github.com/evanspearman/ComfyMath",
    "commit": "be9beab9923ccf5c5e4132dc1653bcdfa773ed70",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/shiimizu/ComfyUI_smZNodes",
    "commit": "378ed4567f3290823d5dc5e9556c7d742dc82d23",
    "install_file": null
  },
  {
    "reference": "https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet",
    "commit": "33d9884b76e8d7a2024691c5d98308e7e61bf38d",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/storyicon/comfyui_segment_anything",
    "commit": "ab6395596399d5048639cdab7e44ec9fae857a93",
    "install_file": "install.py"
  },
  {
    "reference": "https://github.com/ltdrdata/ComfyUI-Impact-Pack",
    "commit": "971c4a37aa4e77346eaf0ab80adf3972f430bec1",
    "install_file": "install.py"
  },
  {
    "reference": "https://github.com/WASasquatch/was-node-suite-comfyui",
    "commit": "6c3fed70655b737dc9b59da1cadb3c373c08d8ed",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/cubiq/ComfyUI_essentials",
    "commit": "bd9b89b7c924302e14bb353b87c3373af447bf55",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/kijai/ComfyUI-KJNodes",
    "commit": "d25604536e88b42459cf7ead9a1306271ed7fe6f",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes",
    "commit": "d78b780ae43fcf8c6b7c6505e6ffb4584281ceca",
    "install_file": null
  },
  {
    "reference": "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved",
    "commit": "f9e0343f4c4606ee6365a9af4a7e16118f1c45e1",
    "install_file": null
  },
  {
    "reference": "https://github.com/cubiq/ComfyUI_IPAdapter_plus",
    "commit": "0d0a7b3693baf8903fe2028ff218b557d619a93d",
    "install_file": null
  },
  {
    "reference": "https://github.com/glifxyz/ComfyUI-GlifNodes",
    "commit": "5d7e5c80aa175fb6ac860a6d63a15d1f699023fc",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/kijai/ComfyUI-moondream",
    "commit": "b97ad4718821d7cee5eacce139c94c9de51268b8",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/huchenlei/ComfyUI-layerdiffuse",
    "commit": "151f7460bbc9d7437d4f0010f21f80178f7a84a6",
    "install_file": "requirements.txt"
  },
  {
    "reference": "https://github.com/rgthree/rgthree-comfy",
    "commit": "db062961ed4a3cd92f4eb2b8eeedbcc742b5d5e9",
    "install_file": null
  },
  {
    "reference": "https://github.com/kijai/ComfyUI-SUPIR",
    "commit": "656e55e8154a2cdfa0738a3474c8aa8e02113e66",
    "install_file": "requirements.txt",
"author": "kijai",
    "title": "ComfyUI-SUPIR",
    "files": [
      "https://github.com/kijai/ComfyUI-SUPIR"
    ],
    "install_type": "git-clone",
    "description": "SUPIR superresolution"
  }
]
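The manifest above pins each plugin to an exact commit. To reproduce a similar environment locally, each entry could be processed with a sketch like the following. The `/root/ComfyUI/custom_nodes` path matches the paths in the model list above, but the function names and overall flow are assumptions, not the block's actual installer.

```python
import subprocess
from pathlib import Path

# Assumed install location, matching the custom_nodes paths listed above.
CUSTOM_NODES = Path("/root/ComfyUI/custom_nodes")

def plugin_dir_name(reference: str) -> str:
    """Derive the clone directory name from the repo URL."""
    return reference.rstrip("/").split("/")[-1]

def install_plugin(entry: dict) -> None:
    """Clone one manifest entry at its pinned commit and run its install step."""
    dest = CUSTOM_NODES / plugin_dir_name(entry["reference"])
    if not dest.exists():
        subprocess.run(["git", "clone", entry["reference"], str(dest)], check=True)
    # Check out the pinned commit so the environment stays reproducible.
    subprocess.run(["git", "checkout", entry["commit"]], cwd=dest, check=True)
    install_file = entry.get("install_file")
    if install_file == "requirements.txt":
        subprocess.run(["pip", "install", "-r", str(dest / install_file)], check=True)
    elif install_file:  # e.g. install.py
        subprocess.run(["python", str(dest / install_file)], cwd=dest, check=True)
```

Entries with `"install_file": null` only need the clone and checkout steps.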

Exporting the graph

  1. Enable developer options:

    1. Go to settings.

    2. Check Enable Dev mode Options.

  2. Export the API graph with the Save (API Format) button.

Graph example

This is a graph that should run via our ComfyUI Block. Note the Cloudinary link in the image field.

{
  "3": {
    "inputs": {
      "seed": 624586032019704,
      "steps": 4,
      "cfg": 1,
      "sampler_name": "lcm",
      "scheduler": "normal",
      "denoise": 1,
      "model": [
        "13",
        0
      ],
      "positive": [
        "6",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "KSampler"
  },
  "4": {
    "inputs": {
      "ckpt_name": "sd_xl_base_1.0.safetensors"
    },
    "class_type": "CheckpointLoaderSimple"
  },
  "5": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage"
  },
  "6": {
    "inputs": {
      "text": "man in space",
      "clip": [
        "10",
        1
      ]
    },
    "class_type": "CLIPTextEncode"
  },
  "7": {
    "inputs": {
      "text": "text, watermark",
      "clip": [
        "10",
        1
      ]
    },
    "class_type": "CLIPTextEncode"
  },
  "8": {
    "inputs": {
      "samples": [
        "3",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode"
  },
  "9": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "8",
        0
      ]
    },
    "class_type": "SaveImage"
  },
  "10": {
    "inputs": {
      "lora_name": "lcm-lora-sdxl.safetensors",
      "strength_model": 1,
      "strength_clip": 1,
      "model": [
        "4",
        0
      ],
      "clip": [
        "4",
        1
      ]
    },
    "class_type": "LoraLoader"
  },
  "12": {
    "inputs": {
      "ipadapter_file": "ip-adapter-plus_sdxl_vit-h.safetensors"
    },
    "class_type": "IPAdapterModelLoader"
  },
  "13": {
    "inputs": {
      "weight": 0.3,
      "noise": 0,
      "weight_type": "original",
      "start_at": 0,
      "end_at": 1,
      "unfold_batch": false,
      "ipadapter": [
        "12",
        0
      ],
      "clip_vision": [
        "15",
        0
      ],
      "image": [
        "16",
        0
      ],
      "model": [
        "10",
        0
      ]
    },
    "class_type": "IPAdapterApply"
  },
  "15": {
    "inputs": {
      "clip_name": "vit-h-image-encoder.safetensors"
    },
    "class_type": "CLIPVisionLoader"
  },
  "16": {
    "inputs": {
      "image": "https://res.cloudinary.com/dzkwltgyd/image/upload/v1699969928/image-input-block-production/rcvznyr9hhewaf9tdnts.jpg",
      "choose file to upload": "image"
    },
    "class_type": "LoadImage"
  }
}
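To reuse an exported graph like this with different inputs, you can load the JSON and override the relevant node inputs before running it. A minimal sketch, where the `customize` helper is hypothetical and the node ids "3", "6", and "16" refer to the example graph above:

```python
import copy

def customize(graph: dict, *, prompt: str, image_url: str, seed: int) -> dict:
    """Return a copy of an exported API graph with key inputs overridden.

    Node ids match the example graph: "6" is the positive CLIPTextEncode,
    "16" the LoadImage node, and "3" the KSampler.
    """
    g = copy.deepcopy(graph)  # leave the loaded template untouched
    g["6"]["inputs"]["text"] = prompt       # positive prompt
    g["16"]["inputs"]["image"] = image_url  # must stay a valid jpg/png URL
    g["3"]["inputs"]["seed"] = seed         # sampler seed
    return g
```

Any other input (steps, cfg, checkpoint name) can be overridden the same way, since the API graph is plain JSON keyed by node id.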

Example workflows

AnimateDiff

This is a basic AnimateDiff (original repo) workflow based on SD15:

GIF output · Workflow image (drop this into Comfy)

From here on, you could:

  • Add motion LoRAs to control the motion

  • Use an upscaler to make it higher res

  • Use LCM LoRA to make things faster

  • Read how the context options might work: link

IPAdapter with TileControlnet

Txt2Image with SDXL + Upscaler

This workflow uses SDXL to create a base image and then upscales it with the UltimateSD upscale block. Since UltimateSD upscale works best with a tile ControlNet, we also load an SD1.5 checkpoint.

SDXL output (1K) · Upscaled (2K)
