
ComfyUI User Manual: Example Workflows


This page collects example workflows and shows what ComfyUI can do. Every workflow image here embeds its workflow as metadata: you can load the image in ComfyUI to get the full workflow. The UI now supports adding models and pip-installing any missing nodes, and in this post we'll show you some example workflows you can import and get started with straight away. Workflows presented in this article are available to download from the Prompting Pixels site or from the sidebar. This guide demystifies the process of setting up and using ComfyUI, making it an essential read for anyone looking to harness the power of AI for image generation.

Here is an example of how to use upscale models like ESRGAN: put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to apply them.

You can also subtract model weights and add them, as in this example used to create an inpaint model from a non-inpaint model with the formula: (inpaint_model - base_model) * 1.0 + other_model.

The image below shows the empty workflow with the Efficient Loader and KSampler (Efficient) nodes added and connected to each other. For use cases, please check out the example workflows. Here is an example detection using the blazeface_back_camera model: AnimateDiff_00004.mp4.

ComfyUI should be capable of autonomously downloading other controlnet-related models. For the Stable Cascade examples the files have been renamed by adding stable_cascade_ in front of the filename, for example stable_cascade_canny.safetensors.

The best way to learn ComfyUI is by going through examples. Img2Img works by loading an image like the example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI.

AuraFlow.
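The add-difference formula above can be sketched in plain Python. Note the assumptions: plain lists stand in for the model's weight tensors, and the function is an illustration of the arithmetic only, not ComfyUI's actual merge-node code (real merges operate on state dicts of torch tensors).

```python
# Sketch of the "Add Difference" merge described above:
#   merged = (inpaint_model - base_model) * 1.0 + other_model
# Lists stand in for weight tensors; this is illustrative, not ComfyUI's code.

def add_difference(inpaint_model, base_model, other_model, multiplier=1.0):
    merged = {}
    for key in other_model:
        # Extract the "inpainting delta" learned on top of the base model...
        diff = [i - b for i, b in zip(inpaint_model[key], base_model[key])]
        # ...and graft it onto the model you want to turn into an inpaint model.
        merged[key] = [d * multiplier + o for d, o in zip(diff, other_model[key])]
    return merged

base = {"w": [1.0, 2.0]}
inpaint = {"w": [1.5, 2.5]}   # base + inpainting delta
other = {"w": [3.0, 4.0]}     # model to convert
print(add_difference(inpaint, base, other))  # {'w': [3.5, 4.5]}
```

Setting multiplier to 1.0 transfers the full inpainting delta; lower values blend it in partially.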
This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. SDXL Turbo is an SDXL model that can generate consistent images in a single step. For regular SDXL, for example, 896x1152 or 1536x640 are good resolutions. Here's a list of example workflows in the official ComfyUI repo.

ComfyUI also has a mask editor that can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor". The importance of parts of the prompt can be up- or down-weighted by enclosing the specified part of the prompt in brackets, using the following syntax: (prompt:weight).

The area composition example image contains 4 different areas: night, evening, day, and morning. Img2Img works by loading an image like the example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0; the denoise controls the amount of noise added to the image. In this example I used albedobase-xl. Here is an example workflow.

T2I-Adapters are used the same way as ControlNets in ComfyUI: using the ControlNetLoader node. The screenshots below show the Efficient Loader node and the KSampler (Efficient) node in ComfyUI. We will go through some basic workflow examples; however, some advanced workflows are not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI.

This section is a guide to the ComfyUI user interface, including basic operations, menu settings, node operations, and other common user interface options. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI; note that the SDXL examples use a denoise value of less than 1.0. This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend.
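The (prompt:weight) syntax above can be illustrated with a small parser. This is a rough sketch under simplifying assumptions: it handles only flat, non-nested brackets, whereas ComfyUI's real tokenizer also deals with nesting, escapes, and embeddings.

```python
import re

# Sketch of the (prompt:weight) syntax: bracketed spans carry an explicit
# weight, everything else defaults to 1.0. Flat prompts only (assumption);
# ComfyUI's actual parser is more capable.
WEIGHT_RE = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt):
    """Return (text, weight) pairs in order of appearance."""
    parts = []
    pos = 0
    for m in WEIGHT_RE.finditer(prompt):
        before = prompt[pos:m.start()].strip()
        if before:
            parts.append((before, 1.0))
        parts.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip()
    if tail:
        parts.append((tail, 1.0))
    return parts

print(parse_weights("(masterpiece:1.2) a photo of a cat"))
# [('masterpiece', 1.2), ('a photo of a cat', 1.0)]
```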
Expanding the borders of an image within ComfyUI is straightforward, and you have a couple of options available: basic outpainting through native nodes, or the experimental ComfyUI-LaMA-Preprocessor custom node. Download the example image and place it in your input folder. Txt2Img is achieved by passing an empty image to the sampler node with maximum denoise. The LivePortrait nodes can be found in the kijai/ComfyUI-LivePortraitKJ repository on GitHub.

Here is an example of how the ESRGAN upscaler can be used for the upscaling step. For SDXL Turbo, the proper way to use it is with the new SDTurboScheduler node.

Hunyuan DiT Examples: Hunyuan DiT is a diffusion model that understands both English and Chinese; you can load these images in ComfyUI to get the full workflow. In the standalone Windows build you can find the extra model paths config file in the ComfyUI directory. The only important thing is that for optimal performance the resolution should be set to 1024x1024, or to other resolutions with the same amount of pixels but a different aspect ratio.

GLIGEN Examples. Some workflow template collections: Advanced ComfyUI Template For Commercial; ComfyUI-Template-Pack (10 ComfyUI templates for beginners); and ComfyUI-101Days (a daily ComfyUI workflow collection).

With template prompting you don't have to ask each question separately. One of the best parts about ComfyUI is how easy it is to download and swap between workflows; you can load up the following image in ComfyUI to get the workflow. ComfyUI is a simple yet powerful Stable Diffusion UI with a graph-and-nodes interface. In the video examples the first frame will be cfg 1.0 (the min_cfg in the node), the middle frame 1.75, and the last frame 2.5 (the cfg set in the sampler), so frames further away from the init frame get a gradually higher cfg.
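The per-frame cfg scaling mentioned above (min_cfg on the init frame, rising toward the sampler's cfg on the last frame) is a linear interpolation; a minimal sketch:

```python
def frame_cfgs(min_cfg, max_cfg, num_frames):
    # Linearly interpolate cfg from the init frame (min_cfg) to the
    # last frame (max_cfg), so frames further from the init frame get
    # a gradually higher cfg.
    if num_frames == 1:
        return [max_cfg]
    step = (max_cfg - min_cfg) / (num_frames - 1)
    return [min_cfg + i * step for i in range(num_frames)]

cfgs = frame_cfgs(1.0, 2.5, 25)  # min_cfg 1.0, sampler cfg 2.5, 25 frames
print(cfgs[0], cfgs[12], cfgs[-1])  # 1.0 1.75 2.5
```

With min_cfg 1.0 and a sampler cfg of 2.5, the middle frame lands exactly at 1.75, matching the values quoted above.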
Remember to restart your ComfyUI instance (for example, on ThinkDiffusion) after installing new nodes. Each ControlNet/T2I adapter needs the image that is passed to it to be in a specific format, like depthmaps, canny maps, and so on, depending on the specific model, if you want good results.

The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore. (Last update: 01/August/2024.) Note: you need to put the example input files and folders under the ComfyUI root directory's ComfyUI\input folder before you can run the example workflows. ComfyUI is recommended for an easy local installation of AI models, as it simplifies the process.

Examples index: ComfyUI Examples; 2 Pass Txt2Img (Hires fix) Examples; 3D Examples; Area Composition Examples; ControlNet and T2I-Adapter Examples; GLIGEN Examples; Hypernetwork Examples; Img2Img Examples; Inpaint Examples; LCM Examples; Lora Examples; Model Merging Examples; plus a Frequently Asked Questions page.

Welcome to the comprehensive, community-maintained documentation for ComfyUI, the cutting-edge, modular Stable Diffusion GUI and backend. An area composition example: Anything-V3 with a second pass using AbyssOrangeMix2_hard. This guide also covers an introduction to Flux; save the example image, then load it or drag it onto ComfyUI to get the workflow.

The example below shows how to use the KSampler in an image-to-image task, by connecting a model, a positive and negative embedding, and a latent image. Here is an example of how to use the Canny ControlNet, and here is an example of how to use the Inpaint ControlNet; the example input image can be found here. Follow the ComfyUI manual installation instructions for Windows and Linux, and run ComfyUI normally as described above after everything is installed. For the character workflows, the input image should embody the essence of your character and serve as the foundation for the entire workflow.
This repo contains examples of what is achievable with ComfyUI. This UI will let you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface. For the easy-to-use single-file versions of Flux that you can use directly in ComfyUI, see the FP8 checkpoint version below. This guide is designed to help you quickly get started with ComfyUI, run your first image generation, and explore advanced features.

Upload Input Image. The KSampler is the core of any workflow and can be used to perform text-to-image and image-to-image generation tasks. After studying some essential workflows, you will start to understand how to make your own. Advanced Merging CosXL: here is an example of how to create a CosXL model from a regular SDXL model with merging.

Img2Img Examples. Unlike other Stable Diffusion tools that have basic text fields where you enter values and information for generating an image, a node-based interface is different in the sense that you have to create nodes and build a workflow to generate images. You can load these images in ComfyUI to get the full workflow.

Flux Examples: Flux is a family of diffusion models by Black Forest Labs. SD3 performs very well with the negative conditioning zeroed out, like in the following SD3 ControlNet example. In this guide, we are aiming to collect a list of 10 cool ComfyUI workflows that you can simply download and try out for yourself.
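Workflows can also be queued programmatically: ComfyUI's server accepts API-format workflow JSON via a POST to its /prompt endpoint. A sketch under stated assumptions: a default local install listening on 127.0.0.1:8188, and a made-up two-node workflow fragment (the checkpoint filename and node ids are illustrative, not a complete runnable graph).

```python
import json
import urllib.request

# Hypothetical minimal workflow fragment in ComfyUI's API format:
# node id -> {class_type, inputs}. "['1', 1]" links to output 1 of node 1.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}},
}

def build_payload(workflow, client_id="example-client"):
    # The server expects the graph under the "prompt" key.
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow, server="http://127.0.0.1:8188"):
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(server + "/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req).read())

# queue_prompt(workflow)  # requires a running ComfyUI server
```

The same payload can be produced by exporting a workflow with "Save (API Format)" from the UI.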
This workflow can use LoRAs and ControlNets, enabling negative prompting with the KSampler, dynamic thresholding, inpainting, and more. The following images can be loaded in ComfyUI to get the full workflows, and you can try them out with this example workflow. Press "Queue Prompt" once and start writing your prompt; I then recommend enabling Extra Options -> Auto Queue in the interface so changes are rendered automatically.

All LoRA flavours (Lycoris, LoHa, LoKr, LoCon, etc.) are used this way. You can use ComfyUI to connect up models, prompts, and other nodes to create your own unique workflow. Here is a link to download pruned versions of the supported GLIGEN model files. You can use more steps to increase the quality.

There is a portable standalone build for Windows on the releases page that should work for running on Nvidia GPUs, or for running on your CPU only. ComfyUI stands as an advanced, modular GUI engineered for Stable Diffusion, characterized by its intuitive graph/nodes interface; by facilitating the design and execution of sophisticated Stable Diffusion pipelines, it presents users with a flowchart-centric approach.

The only important thing is that for optimal performance the resolution should be set to 1024x1024, or to other resolutions with the same amount of pixels but a different aspect ratio. This guide also includes Flux.1 ComfyUI install guidance, workflows, and examples. With template prompting, you set up a template and the AI fills in the blanks.

Inpaint Examples: inpainting a cat with the v2 inpainting model, and inpainting a woman with the v2 inpainting model; it also works with non-inpainting models. These are examples demonstrating how to use LoRAs.

Here's an example of creating a noise object which mixes the noise from two sources. Additional discussion and help can be found here.
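The "same amount of pixels, different aspect ratio" rule above can be turned into a small helper. Assumptions for this sketch: a target budget of 1024x1024 pixels, and dimensions rounded to multiples of 64, a common constraint for SD-family latents.

```python
# Find a width/height near a target pixel count for a given aspect ratio,
# with both dimensions as multiples of 64 (an assumed latent-size constraint).

def resolution_for_aspect(aspect, target_pixels=1024 * 1024, step=64):
    best = None
    for w in range(step, 4096 + step, step):
        h = round(w / aspect / step) * step
        if h <= 0:
            continue
        # Score by distance from the target pixel budget.
        score = abs(w * h - target_pixels)
        if best is None or score < best[0]:
            best = (score, w, h)
    return best[1], best[2]

print(resolution_for_aspect(1.0))  # (1024, 1024)
print(resolution_for_aspect(896 / 1152))
```

For the 896/1152 aspect ratio this recovers 896x1152, one of the recommended SDXL resolutions mentioned earlier.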
This is the input image that will be used in this example. Here is how you use the depth T2I-Adapter, and here is how you use the depth ControlNet. Download hunyuan_dit_1.2.safetensors and put it in your ComfyUI/models/checkpoints directory; a direct download link is provided.

Img2Img Examples: these versatile workflow templates have been designed to cater to a diverse range of projects, making them compatible with any SD1.5 checkpoint model. The requirements for the CosXL merge are the CosXL base model, the SDXL base model, and the SDXL model you want to convert. The installation process for ComfyUI is straightforward and does not require extensive technical knowledge. If you want to do merges in 32-bit float, launch ComfyUI with: --force-fp32. Put the GLIGEN model files in the ComfyUI/models/gligen directory.

Here is the example of a noise object which mixes the noise from two sources:

    class Noise_MixedNoise:
        def __init__(self, noise1, noise2, weight2):
            self.noise1 = noise1
            self.noise2 = noise2
            self.weight2 = weight2

        @property
        def seed(self):
            return self.noise1.seed

        def generate_noise(self, input_latent):
            noise1 = self.noise1.generate_noise(input_latent)
            noise2 = self.noise2.generate_noise(input_latent)
            return noise1 * (1.0 - self.weight2) + noise2 * self.weight2

(The generate_noise method is reconstructed here as a straightforward linear mix of the two sources.)

Note that in ComfyUI txt2img and img2img are the same node. You can load these images in ComfyUI to get the full workflow. These are examples demonstrating the ConditioningSetArea node. ControlNet and T2I-Adapter workflow examples: note that in these examples the raw image is passed directly to the ControlNet/T2I adapter. On a machine equipped with a 3070 Ti, the generation should complete in about 3 minutes.

Search for the Efficient Loader and KSampler (Efficient) nodes in the node list and add them to the empty workflow. SD3 ControlNet: the InstantX team released a few ControlNets for SD3, and they are supported in ComfyUI. In this example we will be using this image. In the example below we use a different VAE to encode an image to latent space, and decode the result of the KSampler.
This image has had part of it erased to alpha with GIMP; the alpha channel is what we will be using as a mask for the inpainting. A reminder that you can right-click images in the LoadImage node and edit them with the mask editor. For the Windows portable build, simply download, extract with 7-Zip, and run. To point ComfyUI at existing model folders, edit the extra_model_paths.yaml config file with your favorite text editor.

Related guides and custom nodes: a ComfyUI StableZero123 custom node; using the playground-v2 model with ComfyUI; Generative AI for Krita (using LCM on ComfyUI); a basic auto face detection and refine example; and enabling face fusion and style migration.

Upscale Model Examples. Lora Examples (tested on a 2080 Ti 11GB with torch==2). The Flux guide covers: an introduction to Flux.1; Flux hardware requirements; and how to install and use Flux.1. To get started with ComfyUI, visit the GitHub page and download the latest release.

Annotated Examples: learn about node connections, basic operations, and handy shortcuts, and dive into the basics of ComfyUI, a powerful tool for AI-based image generation. The initial set includes three templates, among them a Simple Template and an Intermediate Template; for more details, you can follow the ComfyUI repo. Note that this example uses the DiffControlNetLoader node because the controlnet used is a diff controlnet. ComfyUI is a node-based interface to use Stable Diffusion which was created by comfyanonymous in 2023.
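For reference, extra_model_paths.yaml maps an install name to a base path and per-folder model locations. The entries below are an illustrative sketch, not authoritative: the exact keys and layout follow the extra_model_paths.yaml.example file that ships with ComfyUI, so check that file before editing.

```yaml
# Illustrative sketch of extra_model_paths.yaml (keys assumed from the
# bundled extra_model_paths.yaml.example; verify against your copy).
a111:
    base_path: path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    upscale_models: models/ESRGAN
    embeddings: embeddings
    controlnet: models/ControlNet
```

This lets ComfyUI reuse models already downloaded for another UI instead of duplicating them.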
For example, you might ask: "{eye color} eyes, {hair style} {hair color} hair, {ethnicity} {gender}, {age number} years old". The AI looks at the picture and might say: "Brown eyes, curly black hair, Asian female, 25 years old".

Lora Examples: this is what the workflow looks like in ComfyUI. Hypernetworks are patches applied on the main MODEL, so to use them put them in the models/hypernetworks directory and use the Hypernetwork Loader node. An All-in-One FluxDev workflow in ComfyUI combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. A growing collection of example code fragments covers ComfyUI preference settings. ComfyUI is a popular tool that allows you to create stunning images and animations with Stable Diffusion.

Interface Description: the image below is a screenshot of the ComfyUI interface. Here's a simple workflow in ComfyUI to do the upscaling step with basic latent upscaling; non-latent upscaling is also possible. Here is a basic example of how to use it; as a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow. At times you might wish to use a different VAE than the one that came loaded with the Load Checkpoint node. A ComfyUI custom node for MimicMotion is available from the AIFSH/ComfyUI-MimicMotion repository on GitHub.

The first step in using the ComfyUI Consistent Character workflow is to select the perfect input image. ComfyUI provides a variety of ways to finetune your prompts to better reflect your intention. Area Composition Examples. fal.ai, in collaboration with Simo, released an open-source MMDiT text-to-image model called AuraFlow, which is now supported in ComfyUI. So, we will learn how to do things in ComfyUI in the simplest text-to-image workflow. Rename the example config file to extra_model_paths.yaml and edit it with your favorite text editor.
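The template fill-in described above can be sketched in a few lines. The answers dict here is made up for illustration; in practice it would come from the vision model's reply.

```python
import re

# Sketch of template prompting: placeholders like "{eye color}" are
# replaced with the answers a vision model returned (made-up values here).

def fill_template(template, answers):
    return re.sub(r"\{([^{}]+)\}", lambda m: answers[m.group(1)], template)

template = ("{eye color} eyes, {hair style} {hair color} hair, "
            "{ethnicity} {gender}, {age number} years old")
answers = {
    "eye color": "brown",
    "hair style": "curly",
    "hair color": "black",
    "ethnicity": "Asian",
    "gender": "female",
    "age number": "25",
}
print(fill_template(template, answers))
# brown eyes, curly black hair, Asian female, 25 years old
```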
The full merge formula is (inpaint_model - base_model) * 1.0 + other_model; if you are familiar with the "Add Difference" option in other UIs, this is how to do it in ComfyUI. This is an easy starting workflow. The mixed-noise object could be used to create slight noise variations by varying weight2.

The ComfyUI interface includes the main operation interface and the workflow nodes. In this tutorial, we will guide you through the steps of using the ComfyUI Consistent Character workflow effectively. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. What is ComfyUI?

