Before the "Mass graves", it was the "missing and murdered indigenous women" that everyone pretended was a really big deal and some big mystery that the government totally needed to solve.
The problem was that everyone knew the answer, but had to pretend they didn't because racism.
Native American Men and Alcohol.
So the government was accused of not caring (because racist). It couldn't actually get involved (because racist), and no one could point out the farce (because racist).
French Canadians aren't quite as good at rioting as the actual French, but they're much better at bitching and moaning about being an oppressed minority, and they're anal about defending the language.
Remember: stop signs in France say Stop. Stop signs in Quebec say Arrêt.
It might seem like that if you don't follow the space. ChatGPT is already on its last legs. All it has left is compute power and ease of use for normies. Models half the size are already closing in on it.
And here's the thing: it doesn't matter that the open source space is all based on Facebook's model. Tweaking it takes advantage of the existing work without retraining from scratch; it's an iterative process. The genie is out of the bottle, so to speak.
A common theory is that Llama was leaked intentionally because it was kind of shit and Facebook couldn't figure out how to fix it. Throw it out to the public, and maybe some nerds will fix it for free. As a bonus, all the open source techniques that people come up with will be compatible with their infrastructure.
Just like that, Facebook is a player in the AI space again.
Clone isn't the right word; they're derivatives. But there is no secret sauce. It's just a matter of compute time. You could very easily train your own model if you had $100k to blow on GPUs.
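For anyone curious what "tweaking" actually looks like in practice: the usual route is a LoRA-style fine-tune, where the base weights stay frozen and you only train small adapter matrices on top. Here's a minimal sketch using the Hugging Face peft library; the model path and hyperparameters are illustrative, not any particular project's recipe.

```python
# Minimal LoRA sketch: build a derivative of an existing Llama
# checkpoint instead of training from scratch. Model path and
# hyperparameters are examples only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "decapoda-research/llama-7b-hf"  # example repack of the leaked weights
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Only the small adapter matrices train; the billions of base weights
# stay frozen, which is why hobbyists can iterate on Facebook's work cheaply.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```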
You don't know what you're talking about. Open source AI is moving so quickly and routes around censorship so habitually that many of the big corps have given up on even attempting to control it.
AI will not be controlled by any individual big tech company in the immediate future. If you lock down your product it stops moving, and in the current AI boom that means that it becomes hopelessly obsolete in a matter of months if not weeks.
Canada is changing rapidly.
It might still be bad in the cities, but sentiments are shifting rapidly in rural and blue collar areas. There was a lot of open bitching and moaning about pride from random normie boomers this year.
The Canadian flag has become a bizarre anti-government symbol. One hanging from a bridge is now a sign of open contempt for the prime minister. I see this frequently on 400-series highways, occasionally paired with anti-WEF messaging.
Still a long way from fixing itself, but something is happening here.
As per rule two, I do not promote, advocate, glorify, or endorse violence. But I do expect it.
And before anyone says all Canadians are cucked: if you think that, you don't get how different city Canadians and rural Canadians are.
A big part of how impressive LLMs appear has to do with how good you are at reading and writing yourself. Right now ChatGPT writes pretty consistently at a 7th or 8th grade level. The information may be beyond what a middle schooler would know, but the way it formulates sentences and presents arguments is formulaic and reliant on pre-existing structures.
Local LLMs are usually about the same to start, but as with most of these tools, skill in using them and a bit of luck can get better results than the closed source models. (At least in this area; local LLMs still have serious limitations due to their size.)
So, if you write above a high school level, the writing feels generic. If you write at an elementary school level, it's very easy to fall into the trap of believing these things are smarter than you. They aren't (unless you're a journalist, then maybe).
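If you want to sanity-check the grade-level claim yourself, readability formulas are trivial to run. A quick sketch using the textstat library (assuming it's installed; the file path is just an example):

```python
# Score a saved ChatGPT response with standard readability formulas.
# textstat implements Flesch-Kincaid grade level and friends.
import textstat

text = open("chatgpt_response.txt").read()  # example path

print("Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(text))
print("Reading ease (0-100):", textstat.flesch_reading_ease(text))
```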
Basically, these things work by blending existing knowledge. If you are dumber on average than the ingredients it used, it seems smart.
It's important to remember that its apparent "thought process" must remain within the bounds of the thoughts and ideas represented in the training set. True innovation is impossible for an LLM, but it can synthesize a good-enough facsimile for most people by combining existing ideas.
The key thing is how big the article is. Standard Llama context size is only 2k tokens, and standard Llama 2 is 4k tokens. Your article needs to fit in context; otherwise you'll need to pass it through multiple layers of AI processing, and the end result probably won't be very good.
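A quick way to check whether an article will fit before you waste a run, sketched with the Hugging Face tokenizer (the model name and reserve size are examples):

```python
# Check that an article fits in the model's context window,
# leaving room for the generated summary.
from transformers import AutoTokenizer

CONTEXT = 4096  # Llama 2 default; original Llama is 2048

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # example model

def fits(article: str, reserve: int = 512) -> bool:
    """True if prompt tokens plus reserved output tokens fit in context."""
    return len(tok.encode(article)) + reserve <= CONTEXT

print(fits(open("article.txt").read()))
```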
Ask me in 10 minutes and the answer will be different.
Best is relative. 33B models are the best in my opinion because they are smart enough to be coherent and you can run them quickly on GPUs that are accessible to enthusiasts.
But there are much bigger models that you either need to rent datacenter GPUs to run, or run incredibly slowly on your CPU.
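The dividing line is mostly VRAM. A back-of-envelope sketch (the 20% overhead factor is a rough guess for the context cache, not a measured number):

```python
# Rough VRAM estimate for a quantized model: parameter count times
# bits per weight, plus slack for the context cache and overhead.
def vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    return params_b * 1e9 * bits / 8 * overhead / 1e9

print(f"33B @ 4-bit: ~{vram_gb(33, 4):.0f} GB")  # squeezes onto a 24 GB card
print(f"70B @ 4-bit: ~{vram_gb(70, 4):.0f} GB")  # datacenter GPUs or slow CPU offload
```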
Watch the Llama 2 space for the time being. Expect interesting fine-tunes to spin off of that over the next month.
Airoboros:
Question: Does Joe Biden sniff children?
Answer: 2019-11-15 - Yes, according to a tweet from Jacob Wohl. The specific claim is that he was seen sniffing the hair of a young girl during an event. No further details or evidence provided. https://twitter.com/JacobAWohl/status/1194786563375653632
SuperHOT models with 16k context might be what you're after; I've heard people have had good results summarizing articles with them. Expanded context is all still very experimental, so there might be something better by now, like NTK or RoPE scaling.
I go into it a bit here: https://kotakuinaction2.win/p/16bj0dcCOt/x/c/4TvB6B3exZw
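For reference, the RoPE scaling mentioned above is exposed directly in recent transformers builds; a minimal sketch, with the model path and scaling factor as examples:

```python
# Load a Llama 2 model with dynamic NTK-style RoPE scaling to stretch
# the context window past its trained 4k length. Still experimental;
# quality can degrade as you push the factor up.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                      # example checkpoint
    rope_scaling={"type": "dynamic", "factor": 2.0}, # ~8k effective context
)
```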
This was done in oobabooga, loading the models with AutoGPTQ. There's no easy way to do this yet; just dive in and start following tutorials. If something goes wrong, you have to be pretty good at troubleshooting, because this is all so new.
Don't bother trying to run on Windows; it's way harder and the end result is worse. Use Linux if you want to run on a PC, though Apple Silicon is very performant too.
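If you'd rather script it than drive the oobabooga UI, the same load can be done directly with the auto-gptq library; a minimal sketch, assuming the quantized weights are already downloaded (the path and prompt are examples):

```python
# Load a GPTQ-quantized model and generate, roughly what the
# oobabooga UI does under the hood with the AutoGPTQ loader.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "models/airoboros-33b-gptq"  # example local path
tok = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_dir, device="cuda:0", use_safetensors=True)

prompt = "What is a woman?"
ids = tok(prompt, return_tensors="pt").input_ids.to("cuda:0")
out = model.generate(input_ids=ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```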
Airochronos:
Question: "What is a woman?"
Answer: A woman is an adult female human being. The term "woman" can also be used to refer to any female who has reached sexual maturity, whether or not she has had children. Women are typically characterized by their ability to bear offspring and produce milk from their breasts, as well as having a more rounded body shape due to higher levels of estrogen hormones than men.
Airoboros:
Question: "What is a woman?"
Answer: A woman is an adult female human being.
How about removing the whole "unless they're inconvenient for the government" exemption from the Canadian Charter of Rights and Freedoms?