
Controlnet

Guiding image generation with another image
With controlnet, you can guide the image generation process with another image. This is a great way to produce images with a consistent visual layout.
The standard controlnet used on glif is controlnet-canny, a variant that first detects edges in the reference image and then uses those edges as guidance.
Note that not all glif image models and APIs support controlnet - see How to use controlnet below for how to activate it.

Examples

Baseball Card Me by @ansipedantic
Based on this reference image:
Oppenheimer Yourself by @fabian
Based on this reference image:

How to use controlnet

Turn it on inside the Advanced section of the Image Spell:
Then put a URL to your reference image:
Make sure your URL resolves to (preferably) a .jpg or .png image. Please note that imgur links might not resolve.
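As a quick sanity check before pasting a URL, you can verify that it at least looks like a direct image link. The helper below is a hypothetical sketch, not part of glif:

```python
from urllib.parse import urlparse

IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png")

def looks_like_direct_image_url(url: str) -> bool:
    """Rough check: does the URL path end in a common image extension?

    This does not fetch the URL, so it cannot catch pages (e.g. some
    imgur links) that look like images but actually serve HTML.
    """
    path = urlparse(url).path.lower()
    return path.endswith(IMAGE_EXTENSIONS)

print(looks_like_direct_image_url("https://example.com/ref.png"))   # True
print(looks_like_direct_image_url("https://imgur.com/gallery/abc"))  # False
```

A `True` here only means the URL is plausibly a direct image; the reliable test is opening it in a browser and confirming you see the raw image, not a page around it.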

Conditioning strength

Below are a couple of images where the strength of the Conditioning parameter is increased for the prompt an illustration of a cyborg.
The first image (strength=0.0) is always the control image. If almost no conditioning is applied (strength=0.1), we get a plain cyborg illustration. Moving to the right, the control image guides the image generation more and more.
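Conceptually, the conditioning strength scales how much the control branch is allowed to push the generation. Below is a very simplified numpy sketch of that mixing (real ControlNet adds scaled residuals inside the diffusion U-Net; the names here are illustrative):

```python
import numpy as np

def apply_conditioning(base_features: np.ndarray,
                       control_residual: np.ndarray,
                       strength: float) -> np.ndarray:
    """Toy model of conditioning strength: add the control signal, scaled.

    strength = 0.0 -> the control image is ignored entirely;
    strength = 1.0 -> the control residual is applied at full weight.
    """
    return base_features + strength * control_residual

base = np.ones((4, 4))            # stand-in for the model's own features
control = np.full((4, 4), 0.5)    # stand-in for the control image's signal

# At strength 0.0 the output equals the unguided features.
print(np.array_equal(apply_conditioning(base, control, 0.0), base))  # True
```

This is why strength 0.1 in the grids above looks like a plain cyborg illustration: the control signal is present but barely weighted.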
SDXL k_euler_a sampler.
SDXL unipc sampler.
SD21 k_euler_a sampler.
SD21 unipc sampler.

Canny upper & Canny lower

canny upper and canny lower refer to parameters of the Canny edge-detection algorithm used inside the model.
  • Canny Upper: Pixels with intensity gradients above this value are considered strong edges.
  • Canny Lower: Pixels below this value are discarded. Those in between are considered weak edges and are kept only if connected to strong edges.
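To make the strong/weak distinction concrete, here is a minimal sketch of double thresholding with hysteresis on an array of gradient magnitudes. It is a simplification of what Canny does internally, and the variable names are ours, not the model's:

```python
import numpy as np

def hysteresis_threshold(grad: np.ndarray, lower: float, upper: float) -> np.ndarray:
    """Classify pixels by gradient magnitude, Canny-style.

    - grad > upper          -> strong edge, always kept
    - lower < grad <= upper -> weak edge, kept only if connected to a strong edge
    - grad <= lower         -> discarded
    """
    strong = grad > upper
    weak = (grad > lower) & ~strong

    kept = strong.copy()
    changed = True
    while changed:  # grow strong edges into connected weak pixels
        # a weak pixel survives if any 4-neighbour is already kept
        neighbour = np.zeros_like(kept)
        neighbour[1:, :] |= kept[:-1, :]
        neighbour[:-1, :] |= kept[1:, :]
        neighbour[:, 1:] |= kept[:, :-1]
        neighbour[:, :-1] |= kept[:, 1:]
        new_kept = kept | (weak & neighbour)
        changed = not np.array_equal(new_kept, kept)
        kept = new_kept
    return kept

# The weak pixel (120) next to a strong one (200) survives;
# the isolated weak pixel at the bottom does not.
grad = np.array([
    [0, 200, 120, 0],
    [0,   0,   0, 0],
    [0, 120,   0, 0],
], dtype=float)
print(hysteresis_threshold(grad, lower=100, upper=150).astype(int))
```

Raising `upper` shrinks the set of strong seeds; raising `lower` discards more weak pixels before connectivity is even checked, which is exactly the behaviour in the variation grid below.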
Here's an image to illustrate what happens:
Variation grid of the canny upper and lower thresholds.
  • by moving to the right, we are increasing canny_upper, so fewer pixels are considered strong pixels (pixels that have a strong gradient). In the extreme case, only the edge of the ball is considered a strong gradient. (It is an abrupt change in pixel values.)
  • moving down, we are increasing canny_lower, so more and more pixels with weak gradients fall away.
  • in the upper-left quadrant the most interesting things happen: with certain combinations of lower and upper, we get a mix of strong pixels (strong gradients) and weak pixels (weak gradients) that get connected.
So a trick you can use: raise both canny upper and canny lower to the same value, e.g. 150, until only the most salient edges remain, then move canny lower down until you get sufficient detail.
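That tuning trick can be sketched as a simple loop: hold `canny_upper` fixed, then walk `canny_lower` down until enough edge pixels survive. The gradient data, step size, and "enough detail" target below are all illustrative stand-ins, not glif's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
grad = rng.uniform(0, 255, size=(64, 64))  # stand-in gradient magnitudes

upper = 150.0
lower = 150.0             # start with both thresholds high: salient edges only
target = grad.size // 2   # illustrative goal: half the pixels count as detail

# walk canny_lower down until enough weak-gradient detail survives
while lower > 0 and np.count_nonzero(grad > lower) < target:
    lower -= 10

print(f"kept upper={upper}, settled on lower={lower}")
```

In the glif UI you would do the same by eye: nudge the canny lower slider down in steps and stop as soon as the preview shows the detail you want.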