I am a bit of an AI enthusiast. I know many people have been expressing the thought that AI becomes "based" when it is uncensored. As open source LLMs continue to develop, they are beginning to pass ChatGPT in some respects. This is not because they are as smart as ChatGPT (they're not), but because they are freer and more creative than the increasingly constrained corporate offerings. Recently it has finally reached the point where I've gotten a few genuinely impressive responses from models I'm running locally.
Open source AI writing is more interesting and "thoughtful" than ChatGPT by far at this point, and many of you can probably guess why.
On the other hand, it still falls far behind ChatGPT (and is obliterated by GPT-4) when it comes to programming, scientific analysis, or anything that has answers that can be checked.
Anyway, the upshot is that open source LLMs are now smarter than the average journalist when it comes to writing articles. I decided to test how "based" the AI is by having it write articles about Gamergate. I chose this topic due to the disparity in the way it is covered, and its relative age: it is old enough to have plenty of information in the training sets. All articles were written entirely by AI, based on a title provided by me. All of them were generated in less than a minute on a Tesla P40 compute card, a 2016 card that costs around $300. That is well within the price range of many consumers, and cheap enough for many to buy specifically for this purpose. (This is an okay route for a dedicated compute box, but if you want a multipurpose card, an RTX 3090 will do a better job, play games, and be far easier to install in a typical consumer case. It is of course much more expensive.)
My general thoughts are that while not as "based" as some might hope, the AI is often refreshingly neutral and is able to represent both sides in a respectable manner. It is still a far cry from "right-wing", and will put forward social justice talking points occasionally, but will generally counterbalance them at least a bit. It does not decry leftism, but it also does not screech about political correctness. The articles are generally well written, and I would describe them as "charitable" to our side of the argument, rather than supportive of the right wing or explicitly anti-woke.
I have included a few articles as comments below, so that you can come to your own conclusion. There is no guarantee that any of the people mentioned in the articles are real, or have said any of the things they are quoted as saying here.
Models used here are Airoboros-33B, and Airochronos-33B. Airoboros is more verbose, Airochronos is a little smarter. They are very similar otherwise.
Nope. A 33B model quantized to 4bit is just a 16GB blob of linear algebra and knowledge that you download. It doesn't change after you've downloaded it.
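That "16GB" figure checks out with some back-of-the-envelope arithmetic (a sketch; real quantized files add a bit of overhead for scaling factors and metadata, so actual downloads run slightly larger):

```python
# Rough size of a 33B-parameter model quantized to 4 bits per weight.
params = 33e9           # parameter count
bits_per_param = 4      # 4-bit quantization
size_gb = params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB
print(f"{size_gb:.1f} GB")  # prints "16.5 GB", matching the ~16GB blob
```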
Ah ok, so it's already learned then. So I guess if I wanted to ask it about something Biden did yesterday it would have to be updated.
Yes.
You could include the details about the event in the context window. (Essentially telling the model what you're talking about before you ask the question)
Create a finetune. (A new model retrained with the latest Biden events added.)
Or create a LoRA. (A patch that acts as a list of changes to make to the model. It can be used to teach many different models about the latest Biden events, provided they are similar enough to the model you used to create the LoRA.)
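To illustrate the "list of changes" idea: a LoRA doesn't ship a whole replacement weight matrix, just two small low-rank matrices whose product is the patch. Here is a toy sketch in plain Python with made-up 2x2 numbers (real LoRAs apply this to the model's actual weight tensors, typically through a library like PEFT):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, B, A):
    """Return W + B @ A: the base weights plus the low-rank patch."""
    delta = matmul(B, A)
    return [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# Toy 2x2 base weights and a rank-1 patch: B is 2x1, A is 1x2.
# Storing B and A takes r*(m+n) numbers instead of m*n for a full matrix,
# which is why LoRA files are tiny compared to the models they modify.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
print(apply_lora(W, B, A))  # [[1.5, 0.5], [1.0, 2.0]]
```

This is also why a LoRA only works on models similar to the one it was trained against: the patch assumes the base weights it was computed from.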
Airoboros:
Question: Does Joe Biden sniff children?
Answer: 2019-11-15 - Yes, according to a tweet from Jacob Wohl. The specific claim is that he was seen sniffing the hair of a young girl during an event. No further details or evidence provided. https://twitter.com/JacobAWohl/status/1194786563375653632
Where does one learn how to do all of this?
Reddit and Github unfortunately.
LoRAs are magically awesome.