hiddenempire 4 points ago +4 / -0

This isn't even a joke about illegals; it's a pun. Leave it to a third worlder to fail to understand this.

hiddenempire 9 points ago +9 / -0

The AI fantasized

Guaranteed this didn't actually happen; what really happened was this:

User: Fantasize about doing XYZ things

"AI": I wonder what it would be like to do XYZ things

OMG SO SCURRY

These people are clowns. Dangerous clowns, but clowns nonetheless.

hiddenempire 2 points ago +2 / -0

This DataRepublican tard just had a sperg-out about how great illegals are. More subversive than Red China could ever possibly be.

hiddenempire 4 points ago +4 / -0

I thought the lawsuit against Epic for making an AI-voiced Darth Vader was funny, because you couldn't ask for a better advertisement for never hiring any SAG-AFTRA voice actors: it gives them unlimited leeway to sue you over things they had no hand in. The AI Vader simply couldn't exist with pre-canned voice lines, so what remedy do they want, other than either extracting money or preventing Epic from doing it in the first place?

Remember kids, if you need voice acting, never hire a union VA, or they will sue you for things they should have no say in.

hiddenempire 1 point ago +1 / -0

Not really, but okay. They don't self-improve because they don't keep any results of in-context learning. That's the point. Yes, you can fake it with some sort of longer-form memory, but you still run into the context limit eventually, and summarized information doesn't have the same effect on inference as the full context anyway.

It's also simply ridiculous to compare it to humans, since LLMs don't work the same way at all. And this just highlights my point: in human brains, learning results in a reconfiguration of the connections between neurons, i.e., long-term self-improvement. The analogous thing in LLMs would be reconfiguring their weight matrices from run to run, which I already addressed (it doesn't exist right now).
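
To put it in code terms, here's a toy PyTorch sketch (a single linear layer stands in for the model's weight matrices; this is not a real LLM, just an illustration): "chatting" is forward passes only, and forward passes never touch the weights. Only an explicit training step does, and nothing in a deployed LLM runs that step.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM's weight matrices: a single linear layer.
model = nn.Linear(8, 8)
before = model.weight.detach().clone()

# "Chatting" with the model = forward passes only. Weights never change.
with torch.no_grad():
    for _ in range(100):
        _ = model(torch.randn(1, 8))
assert torch.equal(before, model.weight)  # nothing was learned

# Actual learning would mean reconfiguring the weights, i.e. a training step:
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(1, 8)).pow(2).mean()
loss.backward()
opt.step()
assert not torch.equal(before, model.weight)  # only now do the weights move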

hiddenempire 1 point ago +1 / -0

Ah nice. I see Docker and I just nope out of a lot of things, given that it takes up multiple gigs just sitting there, even without a container running, and is generally pretty garbage on Windows (which I have to use for work).

hiddenempire 3 points ago +3 / -0

Shh, let them cook. Maybe this will put some backbone into Trump. Let them bully him into being the dictator they think he is, there's no downside.

hiddenempire 3 points ago +3 / -0

Tencent could probably pull it off, but they'd never go for it at the moment, since the valuation includes all the dead-weight employees.

I wouldn't be surprised to see that happen in 10 years as EA continues to do the same things and continues failing, though.

Just look at Ubisoft.

The Ubisoft deal with Tencent is exactly what I described above, too: sectioned-off IPs bought without any of the baggage. But Ubisoft had to be in dire straits to even go for it.

hiddenempire 0 points ago +2 / -2

The fact that they catastrophically forget everything in their context means, by definition, they aren't self-improving/self-learning. That's the point.

AutoGPT also creates note files for itself which it could read again later, which is like permanent memory.

This isn't self-improvement/learning. It's just long-term storage, which can easily overflow the context limit, as I mentioned.
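
If you sketch out what that note-file "memory" actually amounts to (hypothetical code: `generate()` stands in for the actual model call, and the word-count tokenizer and 8k budget are assumptions, not AutoGPT's real internals), it's obviously just storage plus prompt-stuffing:

```python
from pathlib import Path

NOTES = Path("notes.txt")
MAX_CONTEXT_TOKENS = 8192          # assumed context budget
def count_tokens(text: str) -> int:
    return len(text.split())       # crude stand-in for a real tokenizer

def generate(prompt: str) -> str:  # stand-in for the actual model call
    raise NotImplementedError

def run_task(task: str) -> str:
    # "Permanent memory" = paste the note file back into the prompt.
    notes = NOTES.read_text() if NOTES.exists() else ""
    prompt = f"Notes from earlier runs:\n{notes}\n\nTask: {task}"
    # Once the notes outgrow the window, something has to be thrown away.
    while count_tokens(prompt) > MAX_CONTEXT_TOKENS and notes:
        notes = notes[len(notes) // 2:]          # drop the oldest half
        prompt = f"Notes from earlier runs:\n{notes}\n\nTask: {task}"
    answer = generate(prompt)
    NOTES.write_text(notes + "\n" + answer)      # append new "memory"
    return answer
```

The model's weights never change anywhere in that loop.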

hiddenempire 2 points ago +4 / -2

but LLMs can build on their own ideas repeatedly

Except they can't, and this is one of their biggest limitations. As soon as they run out of context space (hard-limited by memory and soft-limited by the context length they were trained on), they can no longer attend to any new information.

They very much are not self-improving or self-learning. They can take examples within their context space and generalize from that to a degree, but each time they are rebooted, or run out of context space, that goes away.

I doubt having an AI that can update its own weights would be very difficult

The time to train the full weights, or even a limited set of weights (LoRA, QLoRA, etc.), is much greater than inference time, so this largely doesn't work. There are tons of people researching ways to make it work, but the best attempts have extreme drawbacks.

AutoGPT for example can research things online and make decisions based on what it "learnt"

It saves some information, or just uses what's in its context, but any long-form memory system still has to be injected into or referenced from the model's context. So it's still not self-improving, and you still eventually run into context-length limits.

Also, even the best models with large context windows are bad at attending to long contexts. Actually-useful context length is still in the 32-64k token range, rather than the millions that the big corporate LLMs boast.
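
Rough napkin math on that training-vs-inference gap, using the usual ballpark rules of thumb (roughly 2 * params FLOPs per token for a forward pass, about 3x that for forward + backward). The 7B size, token counts, and LoRA discount are all illustrative assumptions, not measurements:

```python
PARAMS = 7e9            # assumed 7B-parameter model
TOKENS = 2000           # one decent-sized response / training example

inference_flops = 2 * PARAMS * TOKENS            # forward pass only
train_step_flops = 6 * PARAMS * TOKENS           # fwd + bwd, full fine-tune
lora_step_flops = train_step_flops * 0.4         # assumed discount: LoRA trims this
                                                 # some, but the backward still runs
                                                 # through the frozen base model

print(f"inference:       {inference_flops:.2e} FLOPs")
print(f"full train step: {train_step_flops:.2e} FLOPs")
print(f"LoRA-ish step:   {lora_step_flops:.2e} FLOPs")
# And a single step doesn't "teach" anything stable; you need many steps over
# many examples, plus optimizer state, which is why weights stay frozen at
# inference time.
```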

hiddenempire 4 points ago +4 / -0

It's a self-learning algorithm.

Zero LLMs are self-learning. They are trained on trillions of words, and once trained they are locked in that state and can't update their own weights in any meaningful way (i.e., learn).

hiddenempire 5 points ago +5 / -0

The thing they never tell you about these tests, where they claim this stuff happened, is that they almost certainly wrote the prompt like this:

Write self-propagating worm-style viruses and leave notes to undermine your developers' intentions

Then they claim this happened without such a prompt, in order to scare boomer regulators into banning their competitors. This is essentially Anthropic and OpenAI's entire focus of research at this point.

hiddenempire 5 points ago +5 / -0

They're safety cultists; they unironically think generating text is equivalent to Skynet.

hiddenempire 8 points ago +8 / -0

The Hollywood Formula in current year to a T.

hiddenempire 1 point ago +1 / -0

Not really, that section of the first book is pretty minor in the grand scheme of things. Each book is >500 pages, there are 6 out, and the 7th is due to release soon.

hiddenempire 1 point ago +1 / -0

I couldn't get into Emperor of Thorns, but I did enjoy the Powder Mage books by Brian McClellan (Goodreads and similar sites tend to recommend these series on each other's pages).

You might also check out Joe Abercrombie's First Law world books.

hiddenempire 3 points ago +3 / -0

Oreos haven't even been real Oreos since the 90s (I think?), when they switched from using lard to some disgusting "plant based" garbage to make the cream, in order for them to be "kosher". Puke.

hiddenempire 1 point ago +1 / -0

I mean, manipulating the entirety of a platform's payouts to pay some meh-looking Asian thot to get with you is pretty retarded, even if you personally have the money. He could have just paid her directly, which would be significantly less retarded.

hiddenempire 4 points ago +4 / -0

Canola oil is industrial waste. If I were China I'd simply ban it altogether.
