ComfyUI workflow reddit

Upload a ComfyUI image, get an HTML5 replica of the relevant workflow, fully zoomable and tweakable online. Just upload the JSON file, and we'll automatically download the custom nodes and models for you, plus offer online editing if necessary. Share, discover, & run thousands of ComfyUI workflows. Join the largest ComfyUI community.

Merging 2 images together.

(I've also edited the post to include a link to the workflow.)

Apparently they forgot the description of the workflow.

Anyone have a workflow to do the following? Here's what I want to create: load a reference video and a reference image. The video is a person dancing on the street, and the image is a picture of a monkey just standing in the jungle.

I've copy-pasted this workflow from a post around here.

ControlNet Depth ComfyUI workflow.

The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL fixing it.

Prediffusion is a dual-prompt-style workflow where two prompts are used to create a better image than might be achieved with only one prompt.

I'm looking for a workflow for ComfyUI that can take an uploaded image and generate an identical one, but upscaled using Ultimate SD Upscaling. I don't want any changes or additions to the image, just a straightforward upscale and quality enhancement.

This workflow allows you to change clothes or objects in an existing image. If you know the required style, you can work with the IP-Adapter and upload a reference image. And if you want to get new ideas or directions for design, you can create a large amount of variations in a process that is mostly automatic.

Custom node support is critical IMO, because any significantly complex workflow will be leveraging something custom.

Warning (OP may know this, but for others like me): there are 2 different sets of AnimateDiff nodes now.

In this workflow I experiment with the cfg_scale, sigma_min, and steps space randomly, and use the same prompt and the rest of the settings.

I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. ComfyUI's inpainting and masking ain't perfect.

Hey everyone! I'm excited to share the latest update to my free workflow for ComfyUI, "Fast Creator v1.4".

It's an annoying site to browse, as the workflow is previewed by the image and not by the actual workflow.

Here I just use: "futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, build by Tesla, Tesla factory in the background". I'm not using "breathtaking", "professional", "award winning", etc., because that's already handled by "sai-enhance".

Design a better workflow in ComfyUI: learn two easy tips to optimize your ComfyUI workflow for Stable Diffusion.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. Images saved by ComfyUI have all the workflow JSON embedded in the image metadata. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.
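Since several of the snippets above hinge on that embedded metadata (and on it sometimes being incomplete), here is a minimal sketch of how to check what a ComfyUI image actually carries. It assumes Pillow is installed; the file name is a hypothetical placeholder, and the key names reflect what ComfyUI commonly writes, not a guarantee.

```python
import json

from PIL import Image  # pip install Pillow

def embedded_workflow(png_path: str):
    """Return the workflow graph stored in a ComfyUI PNG, or None.

    ComfyUI saves its graph into PNG text chunks (typically under the
    "workflow" and "prompt" keys), which Pillow exposes via Image.info.
    """
    info = Image.open(png_path).info
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None

graph = embedded_workflow("comfyui_output.png")  # hypothetical file name
if graph is None:
    print("No embedded workflow; the metadata was stripped or incomplete.")
else:
    print("Found an embedded workflow with", len(graph.get("nodes", graph)), "entries.")
```

If a shared picture fails the drag-and-drop load, this is a quick way to tell whether the host stripped the metadata (many image hosts re-encode PNGs) or the upload never had it in the first place.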
From their link: "Ultimate Creative Workflow for crafting high-quality 8k images with hyper details, elevate visuals with post-process effects, and take control with render passes." Introducing "Fast Creator v1.4", a free workflow for ComfyUI.

The key things I'm trying to achieve are: …

Take a LoRA of person A and a LoRA of person B and place them into the same photo (SD1.5, not XL).

I'm a complete noob when it comes to Comfy.

Inside the workflow, you'll find a box with a note containing instructions and settings specifics to get the most out of it.

Explore new ways of using the Würstchen v3 architecture and gain a unique experience that sets it apart from SDXL and SD1.5.

This workflow requires quite a few custom nodes and models to run: PhotonLCM_v10.safetensors, sd15_lora_beta.safetensors, sd15_t2v_beta.ckpt. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

I recently switched from ForgeUI to ComfyUI and started building my own workflow from scratch for generating images using just SDXL.

Upscaling ComfyUI workflow.

In almost all cases, it is loaded with custom nodes I've never heard of.

The ComfyUI workflow uses the latent upscaler (nearest/exact) set to 512x912 multiplied by 2, and it takes around 120-140 seconds per image at 30 steps with SDXL 0.9.

One of the most annoying problems I encountered with ComfyUI is that after installing a custom node, I have to poke around and guess where in the context menu the new node is located.

Each workflow runs in its own isolated environment; this prevents your workflows from suddenly breaking when updating a workflow's custom nodes, ComfyUI, etc.

I think it was 3DS Max.

I have also experienced ComfyUI losing individual cable connections for no comprehensible reason, or nodes not working until they were replaced by the same node with the same wiring.

This update includes new features and improvements to make your image creation process faster and more efficient. Watch now to enhance your productivity and streamline your process!

If you have any tips or advice, that would be appreciated :)

Hello! Looking to dive into AnimateDiff and looking to learn from the mistakes of those that walked the path before me 🫑🙌 Are people using…

Hey everyone, I'm looking to set up a ComfyUI workflow to colorize, animate, and upscale manga pages, but I'm running into some challenges, since I'm a noob.

Just a quick and simple workflow I whipped up this …

Just letting everyone know.

Hey everyone, we've built a quick way to share ComfyUI workflows through an API and an interactive widget.
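To make that "share workflows through an API" idea concrete: ComfyUI itself exposes a small HTTP API, and queueing a run is a single POST. A minimal sketch, assuming a default local server on port 8188 and a graph exported from the UI via "Save (API Format)"; the file name is a placeholder.

```python
import json
import urllib.request

# Load a graph exported from ComfyUI via "Save (API Format)".
with open("workflow_api.json") as f:  # placeholder file name
    prompt_graph = json.load(f)

# Queue it on a locally running ComfyUI server (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt_graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # e.g. {"prompt_id": "...", "number": 0, ...}
```

Anything built on top of this (widgets, sharing sites) is essentially wrapping that one endpoint plus a websocket for progress updates.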
I've been using the newer ones listed here, "[GUIDE] ComfyUI AnimateDiff Guide/Workflows Including Prompt Scheduling - An Inner-Reflections Guide" on Civitai, because these are the ones that work with Prompt Scheduling, using GitHub.

ComfyUI workflow question: hey guys, I've generated a face with RunDiffusion, and I also have different images of girls posing (took them from IG). I want to use the face I generated and the different poses from the IG models to generate more images of my own model.

Thanks for posting! I've been looking for something like this.

I'm a heavy Tensor.Art user; it was a great tool to get me started with SD, with a ton of checkpoints, LoRAs, ControlNets, and modes for you to get your feet wet in SD.

I love this workflow, but every second or third generation crashes at the VAE Decode step. I don't get these errors with the v0.5 workflow, but since they're happening with the upscaler, it seems to be an issue with the upscaler. I also sometimes get RAM errors with 10GB of VRAM.

If you are using a PC with limited resources in terms of computational power, this workflow for using Stable Diffusion with the ComfyUI interface is designed specifically for you.

Create animations with AnimateDiff.

If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes". Just my two cents.

Hey everyone, with SDXL being delayed, I thought I might as well try to understand how to improve my workflow with 0.9, but it looks like I need to switch my upscaling method.

This tool also lets you export your workflows in a "launcher.json" file format, which lets anyone using the ComfyUI Launcher import your workflow with 100% reproducibility.

A lot of people are just discovering this technology and want to show off what they created. More of this, please.

A video snapshot is a variant on this theme. They can create the impression of watching an animation when presented as an animated GIF or other video format.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

HOW TO USE: start with the GREEN NODES, write your prompt, and hit Queue.

The images look better than most 1.5-based models, with greater detail in SDXL 0.9.

Every time I want to learn something new from a workflow I see someone post -- especially if it is an older workflow -- I inevitably have to deconstruct, reverse engineer, and carefully examine exactly how it works.

SECOND UPDATE - HOLY COW I LOVE COMFYUI EDITION: Look at that beauty! Spaghetti no more.

However, this can be clarified by reloading the workflow or by asking questions.

Here are the models that you will need to run this workflow: Loosecontrol Model, ControlNet_Checkpoint.

In many ways it is similar to your standard img2img workflow, but it is a bit more controllable and more optimized for purpose than using existing art.

My seconds_total is set to 8, and the BPM I ask for in the prompt is set to 120 BPM (two beats per second), meaning I get 16-beat bars.
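The seconds_total / BPM comment above is just beat arithmetic; a quick sanity check:

```python
def beats_in_clip(seconds_total: float, bpm: float) -> float:
    """Number of beats that fit in a generated clip of the given length."""
    return seconds_total * bpm / 60.0

# 8 seconds at 120 BPM = 8 * 2 beats per second = 16 beats,
# i.e. four bars of 4/4, matching the comment above.
print(beats_in_clip(8, 120))  # 16.0
```

Picking seconds_total so that it lands on a whole number of bars is what keeps the generated loop from cutting off mid-beat.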
Interesting idea, but I'd hope bullets 2 and 3 could be solved by something that leverages the API, preferably by injecting variables anywhere in the GUI-loaded or API-provided workflow (see the sketch below).

I'd like to know if people would be willing to share better ones or to give me advice on how to improve this one.

Grab the ComfyUI workflow JSON here.

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

Img2Img ComfyUI workflow.

So far, based on my testing, I'm sold on Latent Upscale for enhancing details while maintaining good picture quality, applying SAG and further upscaling with something that doesn't alter the image too much.

Hello everyone; since people here ask for my full workflow and my node system for ComfyUI, here is what I am using: first, I used Cinema 4D with the Sound Effector MoGraph to create the animation; there are many tutorials online on how to set it up.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask) and use "only masked area" so that it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

It is a powerful workflow that lets your imagination run wild.

Comfy1111 SDXL Workflow for ComfyUI.

Nice work! I had essentially the inverse case of this already, ColorMatch, using frequency separation to transfer the blurred colors instead of the details, so I put together a RestoreDetail node as well, with options to either use add/subtract or multiply/divide (both are good in different situations), as well as a guided-filter option which can prevent the oversharpened edges you'll tend to get.

This RAVE workflow in combination with AnimateDiff allows you to change a main subject character into something completely different.

Experimental LCM workflow "The Ravens" for Würstchen v3, aka Stable Cascade, is up and ready for download.

If you see a few red boxes, be sure to read the Questions section on the page.

Now you can condition your prompts as easily as applying a CNet!

Hey everyone, I'm looking to set up a ComfyUI workflow to colorize, animate, and upscale manga pages, but I'd like some thoughts from others to help guide me on the right path. Colorize the manga pages, and use Canny ControlNet to isolate the text elements (speech bubbles, Japanese action characters, etc.) from each panel so they aren't affected.

I hope that having a comparison was useful nevertheless.

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.

ComfyUI needs a standalone node manager IMO, something that can do the whole install process and make sure the correct install paths are being used for modules. That's all.

I've been especially digging the detail in the clothing more than anything else.

While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component, which allowed me to move all the mask logic nodes behind the scenes.

Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, re-rendering with a second model, etc.

This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it.
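On the variable-injection idea from the first comment in this batch: once a workflow is in API (JSON) format, injecting values is just editing node inputs before queueing. A minimal sketch; the node ids and input names ("6"/"text", "3"/"seed") are hypothetical placeholders that depend entirely on your graph.

```python
import json

def inject(graph: dict, node_id: str, input_name: str, value) -> dict:
    """Override a single input of a node in an API-format workflow."""
    graph[node_id]["inputs"][input_name] = value
    return graph

with open("workflow_api.json") as f:  # exported via "Save (API Format)"
    graph = json.load(f)

# Hypothetical node ids: "6" as a CLIPTextEncode prompt, "3" as a KSampler.
inject(graph, "6", "text", "futuristic robotic iguana, extreme minimalism")
inject(graph, "3", "seed", 42)
# The modified graph can then be POSTed to /prompt as shown earlier.
```

This is the mechanism a sharing widget or batch runner would use to expose only a few fields (prompt, seed, resolution) while keeping the rest of the graph fixed.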
It's the closest thing I could find to the experience of a local installation of A1111 or ComfyUI without requiring all the knowledge and time to set one up.

But mine do include workflows, for the most part, in the video description.

With ComfyUI Workflow Manager …

I know you can do this by generating an image of 2 people using 1 LoRA (it will make the same person twice) and then inpainting the face with a different LoRA and using OpenPose / Regional Prompter.

Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description as my account awaits verification. THANK YOU.

An example of the images you can generate with this workflow: …

Forgot to copy and paste my original comment in the original posting 😅 This may be well known, but I just learned about it recently.

SDXL Default ComfyUI workflow.

If necessary, updates of the workflow will be made available on GitHub.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

When you have your ComfyUI running, just drag an image file from your downloads into ComfyUI open in the browser, and it will load the entire workflow as if you had loaded a .json file. Quirky thing at first, but after a few days of using it I find it way better than pnginfo in A1111.
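The drag-and-drop trick in that last comment works because the workflow rides along inside the PNG. For completeness, here is a sketch of the writing side, stamping a graph into an image with Pillow so it drag-loads; the file names are placeholders, and this mirrors rather than reproduces ComfyUI's exact save routine.

```python
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo  # pip install Pillow

with open("workflow.json") as f:  # placeholder: a UI-format workflow file
    graph = json.load(f)

img = Image.open("render.png")  # placeholder: any image to stamp
meta = PngInfo()
meta.add_text("workflow", json.dumps(graph))  # the key ComfyUI reads on drop
img.save("render_with_workflow.png", pnginfo=meta)
```

Note that re-saving or re-encoding the image elsewhere (editors, image hosts) can silently drop these text chunks, which is the usual reason a downloaded workflow picture "doesn't load anything".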
