Never, because the AIs we constantly see shoved in our faces are nothing more than fancy machine learning algorithms that are incapable of making art without stealing from online sources (this is why they need to stay connected to the internet), and fingers are always tricky even from an art perspective, so good luck getting a bloody algorithm to understand complex stuff like correct perspective and human proportions.
People grossly overestimate the capability of AI and won't be convinced even by people who know programming, because they seem to secretly want the Skynet larp to be real. The Skynet larp is also part of how these AI scam artists hype their product to make it seem like it's way more than it actually is.
Elon does this too, by the way, with his autonomous cars etc. Nothing gets the masses clamouring for a product like fear.
You are a far leftist. You know nothing about machine learning. Go back to reddit.
Utterly false. ComfyUI can run completely locally.
Uh huh, how many GB is it to download, and why do you think that is? By the way, I checked the GitHub, and ComfyUI seems to be a somewhat interesting offshoot of Stable Diffusion. Did you even look at the source you listed?
https://github.com/comfyanonymous/ComfyUI
"This ui will let you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. For some workflow examples and see what ComfyUI can do you can check out:"
Also pretty funny that you're trying to call me a leftist when leftists are totally invested in making AI like this a fad.
I have literally unplugged my internet and run it, you mong.
You didn't answer the question, how big is the install size?
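For what it's worth, the size question has a mundane answer either way: the download is dominated by trained model weights (big arrays of numbers), not by a stash of images. A back-of-envelope sketch, with purely illustrative figures rather than measurements of any real model:

```python
# Illustrative only: weight files scale as parameter count x bytes per weight.
params = 1_000_000_000      # a hypothetical model with one billion weights
bytes_per_param = 2         # stored as 16-bit floats
size_gib = params * bytes_per_param / 1024**3
print(f"{size_gib:.2f} GiB of weights")  # about 1.86 GiB before any extras
```

Multi-gigabyte downloads fall out of this arithmetic alone, with no image database involved.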
Oh look, someone who thinks that when you make a prompt, the AI in real-time finds an image on Google for you.
Please learn what machine learning is and until then, stop acting like you know better than us.
I knew you fuckers weren't paying attention to what was being written, and this just confirms it. The offline models will still have been trained on images, which is the point I was trying to make. The people downvoting me don't give a shit about any of that, though, and of course you're making shit up about what I wrote.
Funny how I got accused of being a reddit leftist, and this is exactly what leftists do on reddit to win an argument every bloody time: instead of responding normally, they just make shit up about what the person wrote, even when the post history is all there for people to see. Even if the machine learning algorithm is being trained offline, that doesn't mean it didn't start from online sources. Honestly, fuck the lot of you on this point, I'm not backing down.
I have absolutely no respect for people who think outright lying like that is okay.
I'm not making anything up, this is what you said:
"are incapable of making art without stealing from online sources ( This is why they need to stay connected to the internet )"
Please enlighten me on what this is supposed to mean, if not that you're saying it "steals" images from the internet when you make a prompt. If you were really just saying it has to be trained on images, then 1. that's obvious, because every type of AI has to be trained, so why even mention it, and 2. what does "stay connected to the internet" mean if it can be run offline?
I explained this with how the machine learning algorithm has been caught taking sources from specific artists; people noticed it depending on the keywords the user inputs, which help give the art context. It doesn't always work, I find. I've experimented with it myself, so I do know what I'm writing about.
If the programmers who created this 'offline' AI were being ethical, then they'll have taken images from some online source that isn't copyrighted or owned by an artist to help train their generation models. I wouldn't be surprised, though, if they just nabbed the sources online and then gave you a bunch of models to download so you can carry on the generation process offline.
What happens with image generation especially is that, depending on what sort of keywords you're using, the algorithm will eventually run out of different images to give you once enough images have been generated, and it quickly loses its "zomg thinking AI" mystique.
I've seen this happen frequently with text examples. Since programming is extremely niche, it's very easy to bugger up the algorithm and make it spew gibberish, because it's searching online sources for a correct answer to a programming problem.
There was an experiment on a Godot forum where some muppet tried hooking up ChatGPT, and it sometimes couldn't generate any code at all, or it would directly copy-paste irrelevant posts in answer to the question. It was an absolute disaster, because it was confusing noobs who were genuinely trying to find things out.
TLDR: You downloaded the models for the offline machine learning and plugged them in, and those models will likely have been built from online sources. Yes, it is not connected to the internet, but it is likely using data from online sources that were grabbed at a specific time; that is how the machine learning works, because it needs previous sources to work from and train itself. The algorithm then continues offline, using all of that previous data per generation to continue the image generation.
By the way, I confirmed this with other programmers as well: image generation and these chat bots really aren't that impressive.
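The point both sides keep circling, that a model is trained on previously gathered data and then runs with no connection at all, can be illustrated with a toy. Below, a tiny character-level Markov chain is "trained" on a string standing in for scraped data, after which generation needs only the learned table, not the original data or a network. Everything here is an invented miniature, not how any real image generator is built:

```python
import random

def train(corpus):
    """'Training': record which character follows which in the gathered data."""
    table = {}
    for a, b in zip(corpus, corpus[1:]):
        table.setdefault(a, []).append(b)
    return table  # the model is this table of statistics, not the corpus itself

def generate(table, seed, length, rng):
    """'Generation': uses only the trained table -- no network, no corpus."""
    out = seed
    for _ in range(length):
        out += rng.choice(table.get(out[-1], [" "]))
    return out

corpus = "the model only keeps statistics of the data it saw"
model = train(corpus)
print(generate(model, "t", 20, random.Random(0)))
```

Once `train` has run, the corpus can be deleted and the cable unplugged; `generate` still works, because the model only retains statistics derived from the data.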
Mate you are British, you know nothing but socialism xD
More seriously though, if they argue about file sizes and storage space, perhaps point them to the old demoscene to help them understand how storage can be handled better than in AAA games.
The vast majority of AAA bytes are wasted space (not making a value judgement here -- this is information theory). Textures are legit. We just don't care.
People have been rightly making the point about big studios and how they're having a major brain drain all over. There's no helping them, even if I went out of my way to offer them a solution they'd probably call me a white supremacist bigot after glancing through my social media for five seconds.
Easily one of the biggest problems big studios have is that they refuse to adapt, and their choice of software is dragging them down hard. Instead, the end game seems to be to milk the fanbase for all it's worth and to try to attract leftists who don't play video games in the first place. Oh, and the polygon vomit along with 4K texture files certainly doesn't help the file sizes. It seems it's mostly texture files that are responsible for the level of bloat in these games.
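The texture claim is easy to sanity-check with arithmetic. An uncompressed 4K RGBA texture is 64 MiB before its mipmap chain; real engines ship block-compressed formats (BCn and friends) that shrink this considerably, but the order of magnitude shows why texture data dominates install sizes. A rough sketch with illustrative figures:

```python
# One uncompressed 4K texture, 8-bit RGBA, plus ~1/3 extra for mipmaps.
width = height = 4096
bytes_per_pixel = 4
base = width * height * bytes_per_pixel          # 64 MiB
with_mips = base * 4 // 3                        # mip chain adds about a third
per_texture_mib = with_mips / 1024**2
print(f"{per_texture_mib:.0f} MiB per texture")                      # about 85 MiB
print(f"{per_texture_mib * 1000 / 1024:.0f} GiB for 1000 textures")  # about 83 GiB
```

Even with 4:1 or 8:1 block compression on top of this, a few thousand 4K textures lands in the tens of gigabytes, which matches the bloat being complained about.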
Now that I think about it, a lot of the pros make heavy use of Substance Painter, and then all of a sudden it makes sense.
Yeah, I don't think they actually generate these figures from 3D models but rather from 2D images. A lot of 3D, uh, stuff goes on with hands, and the image processor just doesn't have any context of the actual model for what it's drawing. It's like showing someone a lion from the front and then asking them to draw it from the back: they would probably mess up.
It's not that hard, conceptually, to define "a finger" alongside "a wrist" and "a thumb", and to say that a hand has exactly 4, 1, and 1 of them in such-and-such approximate locations.
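As a sketch of what such hard-coded seeding might look like, here is a toy constraint record for a hand. The names and structure are invented for illustration, not taken from any real generator:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandSpec:
    """Hard-coded anatomical 'seed': what a hand is supposed to have."""
    fingers: int = 4
    thumbs: int = 1
    wrists: int = 1

def plausible_hand(fingers, thumbs, wrists, spec=HandSpec()):
    """Reject generated hands that violate the seeded counts."""
    return (fingers, thumbs, wrists) == (spec.fingers, spec.thumbs, spec.wrists)

print(plausible_hand(4, 1, 1))  # True
print(plausible_hand(6, 1, 1))  # False: the classic AI-hand failure
```

A check like this only filters output, of course; making the generator satisfy the constraint in the first place is the hard part being discussed.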
It would be "seeding" info, rather than a "pure" learning algorithm, though I don't see why having hard-coded prompts in a tool such as an art-asset-maker is a bad thing. You can seed into a chatterbot that it is to roleplay as a game show host, you can certainly seed into a painterbot the concept of a "hand" in its Platonic pure form.
But at that point you're writing an art-making program, not fleecing the public with fancy AI words, so...
I think it just needs a step back and a new approach, with something like the seeding info, yeah. I think working off of a 3D model, though, would solve it.
3D generation of 3D models, with an adjustable default base mesh, then pictures rendered from that: that's what will fix the fingers. The issue with the hard-coded seed is that you don't always see 5 fingers in a given image.
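The geometry-first pipeline being proposed can be sketched in miniature: generate or adjust a base mesh, render views of it, and the digit count is guaranteed by the geometry rather than guessed per image (occlusion in a 2D view no longer breaks the count, because the count lives in the mesh). All names and data below are invented placeholders, not a real renderer:

```python
# Toy sketch: geometry-first generation guarantees the finger count.
BASE_HAND = {"fingers": 4, "thumbs": 1}          # adjustable default base mesh

def adjust_mesh(base, **overrides):
    """Tweak the base mesh (e.g. stylised three-fingered hands)."""
    return {**base, **overrides}

def render_views(mesh, angles):
    """Stand-in renderer: one 'image' record per camera angle."""
    return [{"angle": a, "digits": mesh["fingers"] + mesh["thumbs"]}
            for a in angles]

views = render_views(adjust_mesh(BASE_HAND), angles=[0, 90, 180])
print(all(v["digits"] == 5 for v in views))  # True: every view shows 5 digits
```

The point of the sketch is the invariant: every rendered view inherits its anatomy from one mesh, instead of each image independently guessing how many fingers are visible.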
3D generation has only just started; it's as primitive as DALL-E Mini was when everything went wild. So give it another 2 years.
Under some very controlled circumstances maybe you could pull it off, but you're correct that this would then simply turn into an art program for professionals rather than anything the public would have any interest in. Fuck, I hate normies.
My dude nobody here is a normie.
Then they need to stop being ones when it comes to AI.