Reusable characters

@Paul_S , last night at the AI Artists meetup, you talked about a few different techniques for creating a consistent generated character across images (storytelling). Would you mind sharing your experiences and investigations here? I know several of us are interested in that topic.

Using Stable Diffusion you can train a model to produce consistent characters. Here's a great example of it in use: Is This The Death of VFX? - YouTube


Yeah, what Ernie shared is a great way to create a consistent character.

Here’s a tutorial from Olivio on training your own textual inversion embedding with the Dreambooth extension in Auto1111:

Also, here’s a little about textual inversions. It's essentially what the Dreambooth extension does, except Dreambooth bakes the concept into the model itself, whereas textual inversions live in separate files that you can reuse across multiple models:

I actually haven’t experimented with creating my own textual inversions yet, but I have downloaded and tested ones made by others.
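To make the "separate files, reusable across models" point concrete, here's a minimal sketch of what a textual inversion boils down to: one learned vector per pseudo-token, stored on its own rather than inside the model weights. (Everything here is illustrative: the file format is simplified JSON, not the torch `.pt` files Auto1111 actually uses, and 768 is the token-embedding width of SD 1.x's CLIP text encoder.)

```python
import json
import random

# SD 1.x CLIP text encoders use 768-dim token embeddings (illustrative;
# SD 2.x uses 1024).
EMBED_DIM = 768

# Training textual inversion optimizes this vector against your character
# images; here it's just random numbers standing in for a trained result.
embedding = [random.random() for _ in range(EMBED_DIM)]

# The embedding is saved as a standalone file keyed by a pseudo-token...
with open("my-character-embedding.json", "w") as f:
    json.dump({"<my-character>": embedding}, f)

# ...so any model with a matching embedding width can load it and map
# "<my-character>" in a prompt to this vector, without retraining.
with open("my-character-embedding.json") as f:
    loaded = json.load(f)

print(len(loaded["<my-character>"]))  # 768
```

That separation is why one embedding file can follow your character across different checkpoints, as long as they share the same text encoder architecture.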

Also, if you use Stable Diffusion with a textual embedding across several video frames, you could use something like EbSynth to do a style transfer, as Ernie mentioned:


Here’s a method being used with Unreal Engine and Stable Diffusion:

https://twitter.com/CoffeeVectors/status/1632825278148870145?s=20

Coffee Vectors on Twitter has always been a great resource for me.