One very good reason why AI can’t get human hands right
(www.revolver.news)
Aren’t we already past this? Article has today’s date but cites tweets from January.
We have progressed beyond “haha the dumb AI can’t even do hands right” and “well if it’s doing hands right then it must be art theft!” to “oh no it’s actually learning and isn’t stealing art, we can’t keep up with it, we need to stop all advancement for at least six months so we can plan out how next to overreact” in the span of just a few months.
It hasn't been able to do teeth properly since day one and it's still pretty shit at it.
I love a good face full of fleshteeth
Either that or there's just a couple dozen more than there should be.
The more I learn about "AI", the more I agree it is just a fad and not "the next big thing".
AI is kind of a scam
Yes, the "Adam Ruins Everything" guy is douchey, but he makes a lot of correct points here: https://youtu.be/FdyolKtVNn8?t=1285
All it does is "guess the next word" and present the illusion of intelligence, not real intelligence. It just sucks up huge amounts of data, like the whole internet, mashes it all together, and spits out a response from that giant amalgamated stew.
It isn't creative; it's just a really, really big blender or food processor that spits out the pink slime of human communication.
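To make "guess the next word" concrete, here is a minimal, purely illustrative sketch: a toy bigram model that counts which word follows which in a scrap of text, then samples the next word from those counts. Real chatbots use large neural networks over tokens rather than word counts, but the generate-one-word-at-a-time loop has the same shape. The corpus and function names here are made up for the example.

```python
# Toy illustration of "guess the next word": count word-to-word
# transitions in some text, then repeatedly sample a likely next word.
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, n_words=10):
    out = [start]
    for _ in range(n_words):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Sample the next word in proportion to how often it followed before.
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs, k=1)[0])
    return " ".join(out)

corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog around the mat"
)
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Everything it produces is a remix of the training text, which is the point the comment is making; whether that counts as "just a blender" or as something more is the argument in the rest of the thread.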
I liked the Mass Effect comparison between Artificial Intelligence (sapient) and Virtual Intelligence (illusion of sapience).
What we have now doesn't even reach the VI standard, much less the AI one.
How is that functionally different from thinking, though?
It's getting awful close to Turing test levels though.
Because if I ask you to think of a new animal that's never been seen before but follows natural laws, you would probably imagine something and run it through a list of rules to make sure it's not stupid and could actually function. You would have to base that on every other animal that already exists. You can't create a "new, unique" animal, because everything you think of is built from already existing animals.
It's not really. There's good reason to believe that human intelligence is mostly just predicting what's next, with some secret sauce to give direction and motivation.
But to be considered intelligent it needs to be autonomous and to learn.
Hooking up ChatGPT to a camera and a body instead of a chat box is not hard - it could happen tomorrow - but making it learn in real time is impossible with today's technology. The training method just takes too many computations. However, it can be given the appearance of learning, since it can already look at 32k tokens of context, and with a big enough context it can fake intelligence.
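A rough sketch of that "appearance of learning," assuming a chat wrapper that just keeps recent conversation in a rolling window and re-sends it with every request: the model's weights never change, it only re-reads whatever still fits in the context. `call_model`, `RollingChat`, and the word-count budget are stand-ins invented for this sketch, not any real API.

```python
# Sketch of the "appearance of learning": nothing is trained, the wrapper
# just re-sends as much recent conversation as fits in the context window.
MAX_CONTEXT_WORDS = 200  # stand-in for a real token budget like 32k tokens

def call_model(prompt: str) -> str:
    # Placeholder for a real model call; just echoes so the sketch runs.
    return f"(model reply to a {len(prompt.split())}-word prompt)"

class RollingChat:
    def __init__(self, budget=MAX_CONTEXT_WORDS):
        self.budget = budget
        self.history = []  # list of "speaker: text" lines

    def _trim(self):
        # Drop the oldest lines until the transcript fits the budget.
        while sum(len(line.split()) for line in self.history) > self.budget:
            self.history.pop(0)

    def ask(self, user_text: str) -> str:
        self.history.append(f"User: {user_text}")
        self._trim()
        reply = call_model("\n".join(self.history))
        self.history.append(f"Assistant: {reply}")
        return reply

chat = RollingChat()
print(chat.ask("Remember that my dog is named Rex."))
print(chat.ask("What's my dog's name?"))  # "remembered" only while it fits the window
```

Once the earlier lines scroll out of the window, the "memory" is gone, which is the sense in which it only pretends to learn.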
Yeesh, it reminded me of that Ninth Doctor episode with the nanites that didn't quite understand what a human face was supposed to look like.
"Are you my mummy?"
Great two-parter. Everything Doctor Who should be.