AI Glossary
AI terms and Glif-specific language.
AI
| Term | Definition |
| --- | --- |
ComfyUI | A node-based graphical user interface (GUI) that simplifies the process of using ControlNet and other techniques for controlling and guiding image generation models. |
Canny lower/upper | The two thresholds used by Canny edge detection, which identifies the edges of objects in an image by finding changes in brightness. A lower threshold captures subtler, weaker edges; a higher threshold keeps only the strongest, most obvious edges. |
Conditioning | Training rules for creating the desired output. It guides the AI on specific details, like what objects, colors, or style to include, ensuring the final image matches what you asked for. |
ControlNet | A technique in machine learning that allows users to guide the output of image generation models by providing additional control signals or conditioning inputs. |
Guidance start/end | The points in the generation process at which a control signal (such as a ControlNet) starts and stops being applied, usually expressed as fractions of the total steps. For example, a guidance end of 0.5 means the control only influences the first half of the steps. |
Img2img / image-to-image | A way to generate new AI images from an input image and text prompt. The output image will follow the color and composition of the input image. |
Inpainting | An AI technique that fills in damaged, missing, or obscured parts of an image by generating new content that blends seamlessly with the existing image. |
IP-Adapter | IP-Adapter stands for Image Prompt Adapter. It's an effective and lightweight adapter that adds image prompt capability to pre-trained text-to-image diffusion models. |
LoRA | LoRA (Low-Rank Adaptation) fine-tunes AI models like GPT for specific tasks efficiently by updating only a small part of the model, saving time and resources. |
Negative prompt | Input used to exclude specific elements from a model's output. For example, to generate an image of a cat without a tail, you can add "tail" to the negative prompt. |
Outpainting | An AI technique that expands images beyond their original borders by generating new content that blends seamlessly with the existing image. |
Prompt | Input that a user feeds to an AI system in order to get a desired output. |
Prompt power | The effectiveness of instructions in guiding an AI to produce a specific desired result. The higher the prompt power, the more strictly the output will adhere to the given prompt. |
Run | The process through which an AI follows a set of instructions/prompts, processes information to achieve the desired task, and then generates the requested output, e.g. an image. |
Sampler | A guide that helps the AI decide how to create each part of an image. It makes choices at every step, determining colors, shapes, and textures, based on what it has learned. This helps in turning a simple description into a detailed, realistic picture. |
Seed | A number used as a starting point to generate an image. Setting a specific seed number will produce the same result each time. A "random" seed will produce a different result each time. |
Steps | A stage in the process of drawing a picture. Each step is a moment where the AI adjusts and improves the image. More steps = more detail. |
Token | A basic unit of text that an LLM uses to understand and generate language. A token may be an entire word or parts of a word. One token is equal to approximately four English characters. |
Upscaling | Increasing the resolution of an image, enhancing its details and overall quality. |
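The role of a seed can be illustrated with Python's pseudo-random generator. This is only an analogy (not an image model): fixing the seed makes the "generation" reproducible, just as a fixed seed reproduces the same image.

```python
import random

def generate(seed):
    """Toy 'generation': a seeded RNG stands in for a diffusion model."""
    rng = random.Random(seed)
    return [rng.randint(0, 255) for _ in range(4)]

# The same seed always reproduces the same output.
assert generate(42) == generate(42)
# A different seed produces a different output.
assert generate(42) != generate(7)
```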
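The "one token is roughly four English characters" rule of thumb can be sketched as a quick estimator. This is a heuristic only; real tokenizers split text into learned subword units, so actual counts vary.

```python
def estimate_tokens(text: str) -> int:
    """Rough token-count estimate using the ~4 characters-per-token rule of thumb.
    Real tokenizers (e.g. BPE) use learned subwords, so this is only approximate."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello, world!"))  # 13 characters -> roughly 3 tokens
```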
Glif
| Term | Definition |
| --- | --- |
Glif | A micro AI media generator that can include a number of inputs and outputs. |
Glifmoji | Uses reference photos of your face as an input to generate AI selfies. |