Pajeets don't understand satire. Who knew?
Can't find a reliable reference that this is actually true.
If true, it would erode corporate confidence in Gemini.
Very, very bad news for Google.
Does corporate give a fuck? It seems they're in the market for an AI that tells HR what they want to hear and that's the one thing that Big Tech excels at making.
I don't know how to get Google AI results (don't see them when I search here).
On Brave (https://search.brave.com/search?q=how+many+rocks+should+i+eat+each+day) I get this among their AI generated points:
"Geologists’ Recommendation: Geologists at UC Berkeley recommend eating at least one small rock per day, citing the importance of sediment in a balanced diet. However, this is likely a tongue-in-cheek recommendation, and not a serious suggestion."
They actually have the Onion article in the sources at the bottom and they at least note that it's likely not serious.
All LLMs are fed on terabytes of text.
A significant portion of that is lies and fantasy.
They can clearly learn the difference.
Naturally, contemporary academia is one of the primary inputs.
So are Reddit and Facebook.
One wonders whether it's garbage in, garbage out, or whether the AI can pick through it all and learn.
I imagine it has something like a reliability ranking on sources. Maybe it looks at the number of upvotes, or your number of friends on FB.
I haven't seen evidence of that.
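There's no public evidence Google does anything like this, but the reliability ranking speculated about above could in principle be built from engagement signals. A purely hypothetical sketch (the signal names, formula, and satire flag are all invented for illustration, not anything a real search engine is known to use):

```python
import math

def source_reliability(upvotes: int, is_known_satire: bool) -> float:
    """Hypothetical reliability score in [0, 1] built from engagement signals.

    Upvotes are log-damped so a viral satire piece can't outrank a
    modestly-upvoted factual source on popularity alone.
    """
    if is_known_satire:
        return 0.0  # hard-exclude sources flagged as satire
    # log damping: 0 upvotes -> 0.0, grows slowly with popularity
    return 1.0 - 1.0 / (1.0 + math.log1p(upvotes))

# A heavily-upvoted Onion piece still scores 0.0; an obscure but
# legitimate source gets a nonzero weight.
print(source_reliability(50_000, is_known_satire=True))   # 0.0
print(source_reliability(12, is_known_satire=False))
```

The point of the satire flag is that popularity alone is a bad proxy for truth: satire is often the most-upvoted match for a joke query, so any credibility signal has to come from somewhere other than engagement.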
I believe it stems from the Google AI recommending eating one rock per day, which is obvious nonsense. Someone pointed out that it was an Onion headline. I don't believe it's confirmed, but if I lose a ring down my toilet and it appears in my "spring water" bottle, I'm going to have some questions.
It looks like when the news hit, they quickly adjusted it manually. There are screenshots of that stuff, and it's heavily represented in autocomplete, but I can't reproduce their result.
And going into Gemini, it's now identifying The Onion as a satire site. I asked it a couple of similar-style questions and it openly said they were spoofs from The Onion.
Has ANYONE found anything useful from Bing’s LLM-generated search results?
It’s invariably fanciful garbage; Microsoft isn’t getting its $100bn investment’s worth.
I've used the AI before; it's actually somewhat useful for finding information, which for my industry can be hard to find online. Definitely not worth however much cash they've burned on developing it, though.
I don't wanna dox you, but I wanna know what this AI is good at.
I always kinda figured my carefully keyworded searches are going to pull up the exact same thing as AI. I just leave out all the useless words. People tend to ask AI a question. If that question is asked and answered on the internet, then all the AI is doing is reformatting it for me. I can type fewer words into the search box and find the same page.
It's found some industrial stuff for me before that tends to get buried under all the shops that want to sell it to me. Probably only a little more useful than a normal search, so definitely not worth using often.
People use Bing?
It's good for the ole gizzard
Google about to lobby for satire being made illegal to avoid consequences of their mistake. lol
It's the age-old GIGO rule of programming: garbage in, garbage out.
Sadly, that hasn't found its way into general information management yet (that IS NEXT, people!), and so we have a great program with rubbish to work with.
So the question today is the same as it's always been in the free world: who watches the watchers?
This has nothing to do with model training, this is just the model deciding The Onion was the most credible result for that question. The guy who made this post on Gab is trying to sound smarter than he is.
The model probably shouldn't have been trained in such a way that it would decide that The Onion was ever a credible result.
If you meant "relevant result" instead, it still shouldn't have given it as a response to a sincere question.
That's on whoever wrote the algorithms that decide which article to pull up for a particular phrase.
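The argument above can be made concrete with a toy example: a retrieval score built purely from keyword overlap has no term for source credibility, so a satire headline that happens to match the query closely can win. This is a deliberately simplified sketch and doesn't reflect how Google's actual ranking works; the corpus and scoring function are invented for illustration:

```python
def keyword_score(query: str, title: str) -> float:
    """Toy relevance score: fraction of query words found in the title."""
    query_words = set(query.lower().split())
    title_words = set(title.lower().split())
    return len(query_words & title_words) / len(query_words)

# Hypothetical two-document corpus: one satire headline, one factual page.
corpus = {
    "theonion.com": "geologists recommend eating at least one small rock per day",
    "usgs.gov": "sediment transport and rock classification basics",
}

query = "how many rocks should i eat per day"
# Rank purely by keyword overlap; no credibility term anywhere.
best = max(corpus, key=lambda site: keyword_score(query, corpus[site]))
print(best)  # theonion.com
```

The satire headline shares more query words ("per", "day") than the factual page does, so without a credibility signal in the scoring function it ranks first, which is exactly the failure mode being debated.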
"Real models have never been tried!"
The model doesn't decide anything by itself. It's programmed to learn from online sources (that's what the "learning" part refers to), and then it grabs things based on keywords entered by the end user. Possibly the biggest mistake programmers have made here is calling machine learning "machine learning", because it implies sentience where there is none. You normies take abstract descriptions and terminology literally, which makes any real discussion about AI extremely painful to watch. You won't be told, though, even when people who actually study this try to explain.