On the one hand, it's impressive to see how far it's come.
On the other hand, it's still lightyears away from being acceptable on its own merits.
On the third hand, there was nothing innovative or new added that wasn't already in classic Tom and Jerry shorts. Everything about it looked recycled in the worst way.
If someone ever actually makes a full show using AI, it might be an interesting novelty, but it'll also clearly be reaching beyond its grasp.
I don't think it'll ever work by just handing it a script and having something good come out. There's too much distance, and too much required intelligence, between the words and a good result.
Instead, you'll be able to film a movie with cardboard props and have the AI make it look real. The director himself will be able to play every character and make them look and sound like actual different people.
Yeah, as a development milestone of a new tool, it's possibly interesting. Honestly it doesn't even seem significantly different from a lot of other demos in terms of capability though.
As a product in and of itself, this looks like absolute garbage: nonsense scenes with levitating men, characters popping out of existence at random, and weird animation artifacts all over.
The most interesting part was seeing how the curse of hands apparently applies to door handles too.
AI will always be inbred. Each time I see it I see something a teenage tracer would do on dA.
Three years ago we had the original DALL-E, two years ago Stable Diffusion 1.5, last year SDXL, and now we can generate 5-second clips where the hands and backgrounds are consistent and coherent.
Ironically this might be the only way to 'save' some western studios, and especially gaming: AI upscaling and adding to older content.
They'll just use it to add faggots and browns to older content though.
We'll also get the based version that strips that stuff out.
I'm looking forward to remastering movies in different styles. There were a lot of cheap 80's movies that could be improved if every frame looked like a Frank Frazetta painting.
I know right. Infinite potential.
Conan The Barbarian.
Absolutely, though that still holds up on its own just fine. I was thinking of shit like this.
https://www.imdb.com/title/tt0109627/
Rented that movie when I was a teenager; the cover art is great, but the movie was awful. Also, having looked it up just now, I always assumed it was an 80's movie, but it came out the year AFTER Jurassic Park.
Could use it on Andy Sidaris movies to make all of the hilarious, ridiculously fake boobs look real.
Basically we'll have one Star Wars where Han gives Greedo a bouquet of flowers and they kiss.
And one where Han shoots first and the blood and guts spray everywhere and Luke starts puking from the disgusting display.
And the original will be lost forever, along with our collective sanity.
Thanks, I hate it (A LOT)
This is terrible...
This is an interesting curiosity but I think full generation is a short term dead end. Might be possible with future techniques but it's not there yet.
What is a possibility with current tech is advanced tweening. There are models now where you can provide a start and end frame and get 5 seconds of animation that hits your targets (sort of). The models aren't specifically trained on this task, so it's not perfect, but the potential is there. A model specifically trained on tweening and following motion guides could be amazing.
In the future an artist will be able to draw one or two frames for every shot and a computer will tween the rest instead of a sweatshop. Making a feature length cartoon will be about as difficult as a graphic novel, easily achieved by a small team or even one person. You can sort of do this now with Wan2.1, but it's going to hit mainstream commercial use in just a few more iterations.
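To make concrete what "tweening" asks of a model: given a start and an end keyframe, fill in the frames between them. The trivial baseline is a pixel cross-fade, which produces a ghosting dissolve rather than actual motion; the gap between this and real in-betweening is exactly what a purpose-trained model would close. A minimal sketch in plain NumPy, purely illustrative (this is not how Wan2.1 or any video model works internally):

```python
import numpy as np

def naive_tween(start_frame: np.ndarray, end_frame: np.ndarray, n_frames: int) -> list:
    """Linearly cross-fade from start_frame to end_frame over n_frames.

    This is the dumbest possible 'tween': it blends pixel values instead of
    understanding motion, so moving objects ghost and dissolve. A model
    trained on real in-betweening has to infer the motion path instead.
    """
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # 0.0 at the first frame, 1.0 at the last
        blended = (1.0 - t) * start_frame + t * end_frame
        frames.append(blended)
    return frames

# Two tiny 2x2 grayscale "keyframes": black start, white end
start = np.zeros((2, 2))
end = np.full((2, 2), 255.0)
clip = naive_tween(start, end, n_frames=5)
# The middle frame is a 50/50 blend: every pixel is 127.5
```

A learned tweening model replaces the `blended` line with something that actually tracks objects across the gap, which is why it can follow motion guides instead of just fading.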
Why is Tom working in an office building?
Clinton offshored his manufacturing job in the 90s.
His computer screen changed position on the desk. Unwatchable.
The unmoving background people weren't?
Just like all other AI slop. It looks good, but there is no underlying substance.
Looks horrible. Even netflix slop is more watchable.