Single-image prompt-based video generation will probably always have the problem of morphing the subject in whatever way is closest to the video generator's training data. With this, I would think the start-image -> end-image models would do better, particularly because you DO have both start and end images in the comic book, and likely just want to make the in-between frames make sense. Would you mind if I took a couple of your images and gave it a try on my local machine?
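For what it's worth, the in-betweening idea can be sketched as a naive linear cross-fade between the two keyframes. This is just an illustrative baseline, not what a real start/end-conditioned video model does (those learn actual motion rather than blending pixels), but the interface is the same: two images in, a sequence of intermediate frames out.

```python
import numpy as np

def naive_inbetween(start: np.ndarray, end: np.ndarray, n_frames: int) -> list:
    """Generate n_frames intermediate frames by linear cross-fade.

    A trivial stand-in for a learned start/end-frame video model:
    real models replace this pixel blend with learned motion, but the
    inputs and outputs are the same shape.
    """
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)  # interpolation weight, strictly between 0 and 1
        # blend in float to avoid uint8 overflow, then cast back
        frame = (1 - t) * start.astype(np.float64) + t * end.astype(np.float64)
        frames.append(frame.astype(start.dtype))
    return frames
```

A learned interpolator (e.g. a frame-interpolation or keyframe-conditioned diffusion model) would slot into the same spot where the blend happens.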
No, that would be awesome.
Love this - looking forward to the result!
Not terrible, but I understand your frustration. Here’s a comparable article from me that didn’t age well:
https://undergrounddesigns.substack.com/p/why-cant-chatgpt-draw-a-centaur?
Hilarious read, thanks.
Indians all the way down! I love it! LOL
This is why so far, as an artist, AI is only good enough to generate or modify reference images. It doesn't have to be right because as long as it is more or less what I need, I can adjust that in the drawing process.
It can punch up human art, but it's not consistent or accurate when generating something from a prompt.