🖼️ ControlNet
Guiding image generation with another image
With ControlNet, you can guide the image generation process with another image. This is a great way to produce images with a consistent visual layout.
The standard ControlNet used at glif is controlnet-canny, a ControlNet variant that first detects edges in the reference image and then uses those edges as guidance.
Note that not all Glif image models and APIs support ControlNet; see How to use ControlNet below for how to activate it.
Examples
Baseball Card Me by @ansipedantic
Based on this reference image:
Oppenheimer Yourself by @fabian
Based on this reference image:
How to use ControlNet
Turn it on inside the advanced section of the image block:
Then put a URL to your reference image:
Make sure your URL resolves to (preferably) a .jpg or .png image. Please note that Imgur links might not resolve.
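If you are unsure whether a URL serves a direct image file, a quick check of its Content-Type header can help. This is just a minimal sketch using Python's requests library; it is not part of Glif itself, and the example URL is a placeholder.

```python
import requests

def is_direct_image_url(url: str) -> bool:
    """Return True if the URL resolves directly to a JPEG or PNG image."""
    resp = requests.head(url, allow_redirects=True, timeout=10)
    content_type = resp.headers.get("Content-Type", "")
    return content_type.startswith(("image/jpeg", "image/png"))

# Hypothetical example URL -- replace with your own reference image.
print(is_direct_image_url("https://example.com/reference.png"))
```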
Conditioning strength
Below are a couple of images where the strength of the Conditioning parameter is increased for the prompt "an illustration of a cyborg".
The first image (strength=0.0) is always the control image. If almost no control scale is applied (0.1), we get a base cyborg illustration. Moving to the right, the control image guides the image generation more and more.
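Glif's internals are not published here, but the same strength sweep can be sketched with the open-source diffusers library, which exposes an equivalent controlnet_conditioning_scale parameter. Treat this as an illustrative assumption about how such a sweep works, not as Glif's actual implementation; the model IDs and edge-map URL are placeholders.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a canny ControlNet and a Stable Diffusion pipeline that uses it.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The control image is assumed to be a pre-computed canny edge map.
edge_map = load_image("https://example.com/edge_map.png")  # placeholder URL

# Sweep the conditioning strength: 0.1 barely follows the edges,
# 1.0 follows them closely.
for scale in (0.1, 0.3, 0.5, 0.7, 1.0):
    image = pipe(
        "an illustration of a cyborg",
        image=edge_map,
        controlnet_conditioning_scale=scale,
    ).images[0]
    image.save(f"cyborg_strength_{scale}.png")
```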
Canny upper & Canny lower
Canny upper and canny lower refer to threshold parameters of the Canny edge-detection algorithm used inside the model.
Canny Upper: Pixels with intensity gradients above this value are considered strong edges.
Canny Lower: Pixels below this value are discarded. Those in between are considered weak edges and are kept only if connected to strong edges.
Here's an image to illustrate what happens:
Moving to the right, we are increasing canny_upper, so fewer pixels are considered strong pixels (pixels with a strong gradient). In the extreme case, only the edge of the ball is considered a strong gradient, since it is an abrupt change in pixel values. Moving down, we are increasing canny_lower, and more and more pixels with weak gradients fall away. In the upper-left quadrant the most interesting things happen: with certain combinations of lower and upper, we get a mix of strong pixels (strong gradients) and weak pixels (weak gradients) that get connected.
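To build intuition for what the two thresholds do, you can reproduce a similar grid yourself with OpenCV, whose cv2.Canny function takes the lower and upper thresholds directly. This is a minimal sketch, not part of any Glif workflow; the input filename is a placeholder.

```python
import cv2

# Placeholder input: any reference image on disk.
img = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)

# Sweep both thresholds; cv2.Canny(image, threshold1, threshold2)
# uses threshold1 as the lower and threshold2 as the upper bound.
for lower in (50, 100, 150, 200):
    for upper in (100, 150, 200, 250):
        if lower >= upper:
            continue  # keep the lower threshold below the upper one
        edges = cv2.Canny(img, lower, upper)
        cv2.imwrite(f"edges_lower{lower}_upper{upper}.png", edges)
```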
Tip: Move both canny upper and canny lower up to the same value, e.g. 150, until only the most salient edges remain, then move canny lower down until you get sufficient detail.