submitted 1 year ago* (last edited 1 year ago) by okamiueru@lemmy.world to c/stablediffusion@lemmy.ml
[-] okamiueru@lemmy.world 2 points 1 year ago

I think the most important "trick" was to loop the refiner back a couple of times. The refiner can both remove and add details, or reinforce a particular art style. Piping the latent output into another KSampler and repeating this 2-3 times would, for some prompts, consistently and greatly improve the images.
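
In ComfyUI this is done by wiring the latent output of one KSampler node into the next; the loop structure can be sketched in plain Python. Note that `refine` here is a hypothetical stand-in for one refiner pass over a latent, not a real SDXL call:

```python
def refine(latent, strength=0.3):
    """Hypothetical stand-in for a single refiner KSampler pass:
    nudges each latent value toward a target, standing in for the
    detail the refiner adds or removes."""
    return [x + strength * (1.0 - x) for x in latent]

def loop_refiner(latent, passes=3):
    """Pipe the refiner's latent output back into another refiner
    pass, 2-3 times, instead of stopping after a single pass."""
    for _ in range(passes):
        latent = refine(latent)
    return latent

latent = [0.2, 0.5, 0.9]
single_pass = refine(latent)
looped = loop_refiner(latent, passes=3)
```

Each extra pass pushes the latent further than a single refinement would, which is why the effect compounds over 2-3 iterations.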

I don't know how detailed other people's prompts are, but this one has about 20 descriptive, weighted terms. It is very consistent in quality and visual aesthetic, yet creative in the creature design. I'm absolutely amazed by SDXL.

[-] okamiueru@lemmy.world 1 points 1 year ago

Example of repeated iterations with the refiner:

[-] choas@lemmy.world 1 points 1 year ago

It looks like it gets a 3rd/4th leg.

[-] okamiueru@lemmy.world 2 points 1 year ago

Indeed. I usually mix down multiple iterations manually and pick the features I like.
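
The author does this mixing by hand in an image editor, but the idea (per-pixel, pick the feature you like from one iteration or the other via a mask) can be sketched programmatically. This is an illustrative analogue, not the author's actual workflow:

```python
def composite(iter_a, iter_b, mask):
    """Blend two refiner iterations pixel-by-pixel: where the mask
    is truthy, keep iteration A's pixel (e.g. the good leg); where
    it is falsy, keep iteration B's pixel. A rough analogue of
    masking layers in an image editor."""
    return [a if m else b for a, b, m in zip(iter_a, iter_b, mask)]

# Toy 1-D "images": keep A's first and last pixels, B's middle one.
iteration_a = [10, 20, 30]
iteration_b = [90, 80, 70]
mixed = composite(iteration_a, iteration_b, mask=[1, 0, 1])
```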

this post was submitted on 11 Jul 2023
26 points (100.0% liked)

Stable Diffusion

1389 readers
1 users here now

Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.

Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.

founded 1 year ago
MODERATORS