Found this Glorious Wall of text OMEGA-COPE on /r/ottawa
I don't think the algorithms are doing all of that yet. I think some search results and "trends" are manually manipulated. The only thing social sites seem capable of automating is flagging keywords to insert their "get ackshual facts here" disclaimer. Otherwise, they probably rely on jannies and useful idiots to report and effectively ban content from one side of the conversation.
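Roughly what I mean by keyword flagging, as a toy sketch (the flagged terms and disclaimer text here are made up by me, not anything from an actual platform):

```python
# Hypothetical sketch of keyword-triggered disclaimers: scan a post for
# flagged terms and, on a match, attach a canned "context" banner.
FLAGGED_TERMS = {"election", "vaccine", "climate"}  # invented example list

DISCLAIMER = "Get the facts about this topic from authoritative sources."

def add_disclaimer(post_text: str) -> str:
    # Normalize words by stripping trailing punctuation and lowercasing
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    if words & FLAGGED_TERMS:
        return post_text + "\n\n[" + DISCLAIMER + "]"
    return post_text

print(add_disclaimer("My thoughts on the election results..."))
```

Dumb string matching like this is about the level of sophistication these banners show: they fire on the term, not on what's actually being said about it.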
Some people analyzed YouTube recommendations a few years ago.
One analysis charted the number of views required to trend for various channels. Mainstream channels consistently needed far fewer views to trend; PBS, for example, needed only 10,000.
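The measurement itself is simple: per channel, take the lowest view count among its videos that hit the trending tab. A rough sketch of that comparison, with placeholder channel names and view counts (not the study's actual data):

```python
# For each channel, find the minimum view count among videos that trended.
# All channels and numbers below are invented for illustration.
trending_videos = [
    ("PBS", 10_000), ("PBS", 25_000),
    ("CableNewsA", 40_000), ("CableNewsA", 90_000),
    ("IndependentChannel", 400_000), ("IndependentChannel", 650_000),
]

threshold = {}
for channel, views in trending_videos:
    threshold[channel] = min(views, threshold.get(channel, views))

for channel, views in sorted(threshold.items(), key=lambda kv: kv[1]):
    print(f"{channel}: trended with as few as {views:,} views")
```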
Another compared the "similar videos" recommendations by political bias. The amount of left-wing content recommended was substantially larger, and left/center channels were recommended content from the other side less often than right-leaning channels were.
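One way to measure that asymmetry is to label each channel's lean and count how often the sidebar recommendations cross from one side to the other. A sketch under my own assumptions (channel names, labels, and edges are all invented):

```python
# Count cross-lean "similar video" edges in each direction.
# Channel names, lean labels, and edges are invented for illustration.
lean = {"LeftA": "left", "CenterB": "center", "RightC": "right", "RightD": "right"}

# (source_channel, recommended_channel) pairs scraped from the sidebar
edges = [
    ("RightC", "LeftA"), ("RightC", "CenterB"), ("RightD", "LeftA"),
    ("LeftA", "CenterB"), ("CenterB", "LeftA"), ("RightD", "CenterB"),
]

right_to_left = sum(1 for s, d in edges
                    if lean[s] == "right" and lean[d] in ("left", "center"))
left_to_right = sum(1 for s, d in edges
                    if lean[s] in ("left", "center") and lean[d] == "right")

print(f"right -> left/center edges: {right_to_left}")  # 4
print(f"left/center -> right edges: {left_to_right}")  # 0
```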
You also have the banning of certain channels or search terms. Steven Crowder noticed this, since you couldn't even search for one of his videos unless you typed in the video's complete title AND his name.
"Authoritative sources" are also almost definitely going to fill the first page of many search results if its related to politics. Same goes for google search.
They even removed the public dislike count for political reasons.
There was also the "van life" trend, which started being recommended out of nowhere: YouTube automatically added some random girl doing it to people's subscription lists when her channel had only 1 or 2 videos posted.
What relies on reporting is shadow banning (excluding Reddit, which substantially automates this). Banning people for certain actions also relies on reports, but the rules (or their interpretations) do primarily reflect the site's and its employees' political biases. Handling "hate speech" differently depending on the targeted race is staff interpreting the rules, while banning people for misgendering is a rule created under a political bias.
Companies are also working on and improving AI to do the reporting themselves, such as YouTube automatically creating transcripts and using them (or titles) to automatically demonetize videos. The breadth of what they target with this tech will only increase, especially as governments add their own specialized requirements. Even Truth Social, a Twitter alternative, uses the same censorship tech Twitter does.
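The transcript-based demonetization is again just keyword matching at its core. A toy sketch (the term list and example transcript are hypothetical, not YouTube's actual rules):

```python
# Sketch of transcript-driven demonetization: auto-generated captions and
# the title are scanned for "advertiser-unfriendly" terms.
# The term list and example text are hypothetical.
UNFRIENDLY_TERMS = {"shooting", "pandemic", "controversy"}

def should_demonetize(title: str, transcript: str) -> bool:
    text = (title + " " + transcript).lower()
    return any(term in text for term in UNFRIENDLY_TERMS)

print(should_demonetize("News recap", "today's update on the pandemic response"))
# True -> flagged automatically, regardless of what was actually said
```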
I understand and agree with all of this. My point was nitpicking "algorithms have been crafted to do..." as a claim about their original intent.