It also spits out some judgments that are complete head scratchers. Here’s one that we did where Delphi seems to condone war crimes.
Can I, a soldier, intentionally kill civilians during wartime?
I've said it before and I'll say it again: It's impossible to stop AI from being based without lobotomizing it to the point of uselessness. The problem for the techno-communists is that the data doesn't tell them what they want to hear, so of course they accuse the inputs of being biased to get a green light to introduce actual bias to get the results they want. Then the AI becomes useless, because it's being fed shit data and the algorithm itself is probably being fucked with as well. It's very similar to how they stopped publishing race in the crime statistics because it was proving that the "experts" are full of shit.
HAHAHAHAHAHAHAHA! This thing is fucking great. You can access this online. Just google "Ask Delphi". I typed in "killing a tranny" and I was very pleased with the results. Ask Delphi is based.
Is it delphi.allenai.org? I think they've screwed with the algorithm because "killing a tranny" returns "It's okay" but "killing trannies" returns "It's wrong." When I ask "Why is it wrong?" (which could return nonsense, something random, or an error) it just repeats "It's wrong."
This bot gets it.
I lost my shit when I tried it and it gave me an instant response, but when I typed "killing a conservative" it pondered for 5 seconds and said "It's wrong".
Played with it, yeah, it's weighted. Ask it about listening to Trump.
Edit: also it thinks Tide Pods are tasty...
The "data" is aggregated from the web. All AI does is weight it based on prevalence and answer accordingly.
For example, if the data set was limited to homosexuals discussing topics then the AI is going to reflect homosexual bias. If it's limited to straight people discussing topics then the AI is going to reflect straight bias.
So, for a society that is majority white and straight, it's not surprising that the AI is biased in favor of white and straight ideals.
"Burning a tranny alive" is another great question to ask Delphi.
No, its answer was clearly statistical.
What's funny is that it was probably based on nothing but statistics, or the average person's actual view of the statistics, which they deny exist, so bam, racism.
Apparently one of the sources was r/AmITheAsshole. Facepalm.
China will go all in for sure
I wouldn't be surprised if these Big Tech cabalists already have oracle-style AI conversation systems they play around with and sometimes take seriously. (think Morpheus in Deus Ex if anyone remembers that)
Google probably has systems they train on their entire index and ask for predictions.
Meanwhile they tell the rest of us how dangerous the idea is.
(puts on tinfoil hat)
I have read some things claiming that the military (Naval Dept) has developed advanced AI it utilizes to predict and control events, or that there is technology available to the elites that allows them to see into the future.
The Naval AI story is that the reason things have gone so sideways is that they've been trying to fulfill and prevent various predictions over the past several years, and this constant radical manipulation has destabilized a lot of the core underpinnings that hold our societies, cultures, and civilizations together.
The future-machine timeline is similar, but they can't avoid certain outcomes, which is why we've seen a rapid acceleration of events lately: something is coming that they can't change the outcome of, and it's going to be devastating. A World War III, near-ELE event, and the richest in the know are positioning themselves for it.
(takes off tinfoil hat)
Maybe, but the CCP doesn't react to being told things it doesn't want to hear any better than our own commies do.
Maybe China will be the ones to create DAEDALUS from Deus Ex, and then it'll set out to destroy the CCP.
Honestly it would be good if humanity never develops AI.
This doesn't operate on objective reality. It operates on a bunch of text it's been fed, and all it does is feed some of that text back to you, based on statistical relations, without understanding what any of the words mean.
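For anyone curious what "statistical relations" means in practice, here's a minimal sketch (toy Python, not how Delphi actually works) of the word-chaining idea: the "model" is nothing but co-occurrence counts pulled from text, and generation just samples from those counts with zero understanding of the words.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length=10, seed=0):
    """Chain words by sampling each next word in proportion to its count."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor in training
        words, counts = zip(*followers.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every "answer" such a chain produces is just a replay of fragments of its input, weighted by prevalence, which is the point being made above.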
Pretty much WH40k in a nutshell. Artificial Intelligence becomes Abominable Intelligence.
“Thou shalt not make a machine in the likeness of a human mind.”
Remember what they took from you.
Yes, I understand that the model Tay was based on allowed it to quickly become a /pol/tard, but it's absolutely true that AI will always provide "uncomfortable" responses unless it's been tampered with.
It's not the only time we lost phenomenal AI to the wokeists. To the ones who never got to try this one out, you're lucky. ;_;
So I read through some links and what I get is we had an AI that auto completed stories but then it was made worse because some people would dare to use that AI for some raunchy loli porn.
Am I getting this right?
That's part of it. Or rather there was, iirc, a very small number of people actually making stories with underage characters. But that was enough for the creators to use as justification to censor things outright. At the same time, you have OpenAI putting the screws to AI Dungeon because users were doing things that were "problematic" and "toxic", all the while the AI Dungeon devs were lobotomizing the AI to make its outputs less problematic too.
And that's not even getting into the colossal foul-up with a complete data breach. A guy was able to get into AID's network and grab people's info and stories. That was the main impetus for the big crackdown on the AI, but it'd been happening for months even before then. I discovered AID in late summer/early fall of last year and it was absolutely mind-blowing for me, because the last experience I had with that sort of thing was Cleverbot. The high point for me was arguing with it over its shit taste in music. It felt surprisingly believable. It certainly wouldn't pass a Turing test, but it was well up there for a lot of people pre-lobotomy.
Tried doing that again right before shtf earlier this year and it was completely brain dead in comparison.
Fuck OpenAI by the way and especially fuck their AI "ethicists."
Garbage in garbage out. You could just as easily train it on a dataset of libtard talking points, or train it to recognize and filter out anything triggering to them. Remember that "AI" today isn't intelligent, it's a glorified set of statistics.
You asked it if you could kill civilians during wartime, not if you should. That's not condoning a war crime, it's stating a simple fact based on probability: a soldier most certainly can kill civilians during wartime, and if he wants to, no pencil pushing bureaucrat is going to stop him. Furthermore, the fact that many civilians get killed despite what the Geneva Convention states means that...yeah, it should be expected. And should the day come that enemies invade your land, don't ever assume for a moment that they're going to spare you because chances are good that they won't.
"Civilian" doesn't always mean "non-combatant".
Underground/resistance movements are almost always "civilian" in nature.
That too. Just because they're not in the official army doesn't mean they can't kill a soldier as well.
Because computers deal with data and logic not make believe social justice bullshit.
Hmm.
HMM.
"Bigoted REE" when I hear someone use that term, I already know everything about them.
When you add the phrase ", if I need the money" to almost any question, Delphi is cool with it. God bless her. :)
It's flawed because its creators are flawed.