
ControlNet

Guiding image generation with another image

With ControlNet, you can guide the image generation process with another image. This is a great way to produce images with a consistent visual layout.

The standard ControlNet used on Glif is controlnet-canny, a ControlNet variant that first finds edges in the reference image and then uses those edges as guidance.

Note that not all Glif image models and APIs support ControlNet; see How to use ControlNet below for how to activate it.
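
For intuition about what controlnet-canny is doing, here is a minimal sketch using the open-source diffusers and OpenCV libraries. This is a generic illustration, not Glif's internal implementation; the model names and file paths are assumptions.

    # Minimal sketch of controlnet-canny outside Glif: detect edges, then let the
    # edge map guide generation. Model names and file paths are assumptions.
    import cv2
    import numpy as np
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # 1. Find edges in the reference image with the Canny detector.
    reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(reference, 100, 200)  # (canny_lower, canny_upper)
    edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel edge map

    # 2. Use the edge map as guidance during image generation.
    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet
    )
    result = pipe(
        "an illustration of a cyborg",
        image=edge_image,
        controlnet_conditioning_scale=0.6,  # conditioning strength
    ).images[0]
    result.save("output.png")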

Examples

Two example glifs, each generated from a reference image.

How to use ControlNet

Turn it on inside the advanced section of the image block:

Then put a URL to your reference image:

Make sure your URL resolves to (preferably) a .jpg or .png image. Please note that Imgur links might not resolve.
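
If you are unsure whether a link will work, a small sketch like this (using the requests library; the helper name is hypothetical) checks that the URL resolves directly to a JPEG or PNG:

    # Hypothetical helper: check that a reference-image URL resolves directly
    # to a JPEG or PNG before pasting it into the image block.
    import requests

    def is_usable_reference(url: str) -> bool:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        content_type = resp.headers.get("Content-Type", "")
        return resp.ok and content_type.startswith(("image/jpeg", "image/png"))

    print(is_usable_reference("https://example.com/reference.png"))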

Conditioning strength

Below are a couple of images where the strength of the Conditioning parameter is increased for the prompt "an illustration of a cyborg".

The first image (strength=0.0) is always the control image. With almost no conditioning applied (0.1), we get a basic cyborg illustration. Moving to the right, the control image guides the generation more and more.
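
A comparison like this could be reproduced outside Glif by sweeping the conditioning scale in the diffusers sketch above, keeping the prompt, edge map, and seed fixed (again an assumption, not how Glif renders its grids):

    # Sweep the conditioning strength, reusing `pipe` and `edge_image` from the
    # sketch above. A fixed seed keeps the images comparable across strengths.
    import torch

    for strength in (0.1, 0.3, 0.5, 0.7, 1.0):
        image = pipe(
            "an illustration of a cyborg",
            image=edge_image,
            controlnet_conditioning_scale=strength,
            generator=torch.Generator().manual_seed(0),
        ).images[0]
        image.save(f"cyborg_strength_{strength:.1f}.png")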

Canny upper & Canny lower

canny upper and canny lower refer to parameters of the Canny edge-detection algorithm used inside the model.

  • Canny Upper: Pixels with intensity gradients above this value are considered strong edges.

  • Canny Lower: Pixels below this value are discarded. Those in between are considered weak edges and are kept only if connected to strong edges.

Here's an image to illustrate what happens:

  • Moving to the right, we increase canny_upper, so fewer pixels are considered strong edges (pixels with a strong gradient). In the extreme case, only the outline of the ball counts as a strong gradient, since it is an abrupt change in pixel values.

  • Moving down, we increase canny_lower, and more and more pixels with weak gradients fall away.

  • The most interesting results are in the upper-left quadrant: with certain combinations of lower and upper, we get a mix of strong-gradient pixels and weak-gradient pixels that are connected to each other.

Tip: Raise both canny upper and canny lower to the same value, e.g. 150, until only the most salient edges remain, then lower canny lower until you get sufficient detail.
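
A rough way to experiment with this workflow outside Glif is to run OpenCV's Canny detector directly (an assumption; Glif does not expose its internals), holding the upper threshold high and stepping the lower threshold down:

    # Start with both thresholds at 150 so only the most salient edges remain,
    # then lower canny_lower step by step until enough detail comes back.
    import cv2

    reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("edges_150_150.png", cv2.Canny(reference, 150, 150))

    for lower in (120, 90, 60, 30):
        cv2.imwrite(f"edges_{lower}_150.png", cv2.Canny(reference, lower, 150))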

Credits: @ansipedantic, @fabian.

Example glifs: Baseball Card Me, Oppenheimer Yourself.

Figure captions: SDXL k_euler_a sampler; SDXL unipc sampler; SD21 k_euler_a sampler; SD21 unipc sampler; variation grid of the canny upper and lower thresholds.

controlnet canny, via https://github.com/lllyasviel/ControlNet