So any programmers know how long till the whole extra fingers is more or less resolved?
It's pretty much already solved for Stable Diffusion. Just use a Textual Inversion like Bad-Hands-5 in the negative prompt.
The problem is that the big online AIs don't have the image-to-image, masking, and inpainting capabilities that Stable Diffusion has. With SD you can work on an image until it is good. With Bing or whoever else, you only get one shot to get the correct image.
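The "work the image until it is good" loop described above can be sketched roughly like this. `generate()` is a stand-in for a real Stable Diffusion img2img call, and the parameter names are illustrative assumptions, not any specific API:

```python
# Toy sketch of the iterative img2img workflow: feed the last output
# back in at lower strength so each pass only touches up the image.
# generate() is a stand-in; a real backend would return pixels.

def generate(prompt, negative_prompt, init_image=None, strength=1.0, seed=0):
    # Record how the call chain composes instead of actually sampling.
    return {"prompt": prompt, "negative": negative_prompt,
            "init": init_image, "strength": strength, "seed": seed}

image = None
for round_ in range(3):
    image = generate(
        "a crowd of people waving at the camera",
        negative_prompt="bad-hands-5, extra fingers",  # TI embedding token
        init_image=image,                    # None on the first pass (txt2img)
        strength=1.0 if image is None else 0.4,  # later passes keep most pixels
        seed=42 + round_,
    )
```

The one-shot online services stop after the first iteration of this loop; the local workflow is the loop itself.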
Mostly solved, but you need to be observant. MidJourney and Stable Diffusion allow you to edit photos till you get what you want and then upscale.
Once people start posting more pictures of just one hand.
The fundamental issue is the image AI can't count.
So more 5-fingered hands will just increase the likelihood of a normal hand; it won't actually solve the issue.
LOL the first way I parsed this was "how long until all the top programmers have 'extra fingers' and you can't work as a professional programmer without them?"
If you actually know what you're doing it's a solved issue in many situations. This type of image is a bit tricky because there are so many people in the image and they are further from the viewer than average. If you were to generate this at a larger scale, one section at a time, it probably wouldn't be an issue.
Also, something like this is super easy to fix with inpainting.
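At its core, inpainting regenerates only the pixels under a mask and keeps everything else untouched. A toy sketch with numpy, where `regenerate()` stands in for the actual diffusion pass:

```python
import numpy as np

# Only pixels where mask == 1 are repainted; the rest of the image
# is kept exactly as it was. regenerate() is a stand-in for the model.

def regenerate(region):
    return np.full_like(region, 0.5)  # pretend the model repainted it

def inpaint(image, mask):
    """mask == 1 over the bad hand; 0 everywhere else."""
    repainted = regenerate(image)
    return mask * repainted + (1 - mask) * image

img = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1  # small square over the hand
out = inpaint(img, mask)
```

This is why a mangled hand in an otherwise good image is cheap to fix: the rest of the picture never changes.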
Odds are someone was just lazy and grabbed the first image that came out similar to what they were thinking of.
Never, because the AI we constantly see shoved in our faces are nothing more than fancy machine learning algorithms that are incapable of making art without stealing from online sources ( This is why they need to stay connected to the internet ). Fingers are always tricky to do even from an art perspective, so good luck getting a bloody algorithm to understand complex stuff like correct perspective and human proportions.
People grossly overestimate the capability of AI and won't be convinced even by people who know programming, because they seem to secretly want the Skynet LARP to be real. The Skynet LARP is also part of how these AI scam artists hype their product to make it seem like way more than it actually is.
Elon does this too, by the way, with his autonomous cars etc. Nothing gets the masses clamouring for a product like fear.
You are a far leftist. You know nothing about machine learning. Go back to reddit.
Utterly false. ComfyUI can run completely locally.
Uh huh, how many GB is it to download, and why do you think that is? By the way, I checked the GitHub, and ComfyUI seems to be a somewhat interesting offshoot of Stable Diffusion. Did you even look at the source you listed?
"This ui will let you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. For some workflow examples and see what ComfyUI can do you can check out:"
https://github.com/comfyanonymous/ComfyUI
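The graph/nodes interface mentioned in that blurb is essentially a DAG of processing steps wired together. A minimal sketch of the idea; the node names and `evaluate()` scheme are illustrative, not ComfyUI's actual API:

```python
# Minimal node-graph pipeline sketch: each node wraps a function and
# its upstream inputs; evaluation walks the graph, caching results so
# every node runs once, like a real dataflow UI.

class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def evaluate(self, cache=None):
        cache = {} if cache is None else cache
        if self not in cache:
            args = [n.evaluate(cache) for n in self.inputs]
            cache[self] = self.fn(*args)
        return cache[self]

# Wire up a txt2img-shaped chain: prompt -> encode -> sample -> decode
prompt  = Node(lambda: "a hand with five fingers")
encoded = Node(lambda p: f"emb({p})", prompt)
sampled = Node(lambda e: f"latents({e})", encoded)
decoded = Node(lambda l: f"image({l})", sampled)

print(decoded.evaluate())
```

The point of the node style is that masking, img2img, and upscaling steps can be spliced anywhere into the chain without rewriting the rest.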
Also pretty funny that you're trying to call me a leftist, when the leftists are totally invested in making AI like this a fad.
I have literally unplugged my internet and ran it, you mong.
Oh look, someone who thinks that when you make a prompt, the AI in real-time finds an image on Google for you.
Please learn what machine learning is and until then, stop acting like you know better than us.
I knew you fuckers weren't paying attention to what was being written, and now this just confirms it. The models for offline use will still have been trained on images, which is the point I was trying to make. The people downvoting me don't give a shit about any of that, though, and of course you're making shit up about what I wrote.
Funny how I got accused of being a reddit leftist, and this is exactly what leftists do on reddit to win an argument every bloody time: instead of responding normally, they make shit up about what the person wrote, even when the post history is all there for people to see. Even if the machine learning algorithm is being trained offline, that doesn't mean it didn't start from online sources. Honestly, fuck the lot of you on this point; I'm not backing down.
I have absolutely no respect for people who think outright lying like that is okay.
I'm not making anything up, this is what you said:
"are incapable of making art without stealing from online sources ( This is why they need to stay connected to the internet )"
Please enlighten me on what this is supposed to mean, if not you saying it "steals" images from the internet when you make a prompt. If you were really just saying it has to be trained on images, then 1. that's obvious, because every type of AI has to be trained, so why even mention it, and 2. what does "stay connected to the internet" mean if it can be run offline?
Mate you are British, you know nothing but socialism xD
More seriously though, if they argue about file size and storage space, perhaps point them to the old demoscene to help them understand how much can be done with far less storage than AAA games use.
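A back-of-the-envelope calculation makes the storage point concrete. The figures below are ballpark assumptions (Stable Diffusion 1.x was trained on roughly LAION-scale data, on the order of billions of images, and its checkpoint is a few GB):

```python
# Could a checkpoint literally "contain" its training images?
# Both figures are ballpark assumptions, not exact counts.

checkpoint_bytes = 4 * 10**9   # ~4 GB model file
training_images  = 2 * 10**9   # ~2 billion training images

bytes_per_image = checkpoint_bytes / training_images
print(bytes_per_image)
```

That works out to about two bytes per training image. Even an aggressively compressed thumbnail needs kilobytes, so whatever the download size is, the weights cannot literally be an archive of the training images; the gigabytes are learned parameters.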
Yeah, I don't think they actually generate these figures from 3d models, but rather from 2d images. A lot of 3d, uh, stuff goes on with hands, and the image processor just doesn't have any context of the actual model for what it's drawing. It's like if you showed someone a lion from the front and then asked them to draw it from the back: they would probably mess it up.
It's not that hard, conceptually, to define "a finger" alongside "a wrist", "a thumb", and that a hand has exactly 4, 1, and 1 in such and such approximate locations.
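That hard-coded anatomy can be made concrete in a few lines; the counts and field names below are just the ones the comment suggests:

```python
# Declaring what a hand is, per the comment above: exactly 4 fingers,
# 1 thumb, 1 wrist, and rejecting anything that doesn't match.

from dataclasses import dataclass

@dataclass
class Hand:
    fingers: int
    thumbs: int
    wrists: int

EXPECTED = Hand(fingers=4, thumbs=1, wrists=1)

def plausible(hand: Hand) -> bool:
    return hand == EXPECTED

print(plausible(Hand(4, 1, 1)))  # the anatomically normal case
print(plausible(Hand(6, 1, 1)))  # the classic AI hand fails the check
```

The catch, as a later reply in the thread points out, is that real images rarely show all five digits, so a naive equality check like this can't be applied directly to what is actually visible in a picture.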
It would be "seeding" info, rather than a "pure" learning algorithm, though I don't see why having hard-coded prompts in a tool such as an art-asset-maker is a bad thing. You can seed into a chatterbot that it is to roleplay as a game show host, you can certainly seed into a painterbot the concept of a "hand" in its Platonic pure form.
But at that point you're writing an art-making program, not fleecing the public with fancy AI words, so...
I think it just needs a step back and a new approach, with something like the seeding info, yeah. I think working off of a 3d model would solve it, though.
Generating 3d models, with an adjustable default base mesh, then rendering pictures from that: that's what will fix the fingers. The issue with the hard-coded seed is that you don't always see all 5 fingers in an image.
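The proposed mesh-then-render pipeline can be sketched as a chain of stages. Every function below is a hypothetical stand-in, not an existing library:

```python
# Sketch of the proposal above: pose an adjustable base mesh, render
# it, then let the image model restyle the render. All stand-ins.

def fit_base_mesh(prompt):
    # A real implementation would pose a rigged human mesh here.
    return {"mesh": "base_human", "pose": f"pose_for({prompt})"}

def render(scene, camera="front"):
    return f"render({scene['pose']}, {camera})"

def img2img(render_pass, prompt, strength=0.6):
    # The model only restyles the render, so the finger count is
    # fixed by the mesh instead of guessed from 2d statistics.
    return f"stylized({render_pass}, {prompt}, {strength})"

scene = fit_base_mesh("person waving")
image = img2img(render(scene), "person waving, photo")
```

Because the finger count comes from the mesh rather than from 2d statistics, the image stage can no longer invent a sixth finger; it only restyles what was rendered.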
3d generation has just started; it's as primitive as DALL-E Mini was when everything went wild. So give it another 2 years.
Under some very controlled circumstances maybe you could pull it off, but you're correct that this would then simply turn into an art program for professionals rather than anything the public would have any interest in. Fuck, I hate normies.
My dude nobody here is a normie.