I believe that the manipulation of the points/likes/upvote systems is a major problem that needs to be dealt with.
Upvotes and downvotes being equal buries even widely held near-majority opinions.
Hiding downvote counts makes it impossible to tell whether a comment was overlooked or is controversial.
Unlimited up/down votes skew scoring toward fanatics who will take the time to score dozens or hundreds of comments.
Collapsing low-scoring comments lets early readers control the discussion, since hidden comments never get seen enough to climb back to positive.
For example, if every American voted on "abortion should be legal" on reddit, it would score about +6 million and you'd say "wow, that's a great comment, lots of people agree with it," while "abortion is murder" would get about -6 million and the person who wrote it is literally Hitler. But those are a 51% and a 49% view respectively (roughly 168 million versus 162 million voters out of 330 million). This is not good.
But look what happens if downvotes only count for -1/2 point: "abortion should be legal" gets about +87 million and "abortion is murder" gets about +78 million. The difference is still magnified somewhat, but it's not centered around zero, so it's more conducive to real debate. This is halfway between 'controversial' and 'best' scoring and is the best of both worlds.
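The half-weight downvote idea is easy to sketch. A minimal example in Python, assuming a hypothetical population of 100 voters split 51/49 (the function and the weight are illustrative, not any site's real algorithm):

```python
# Illustrative scoring sketch: each downvote subtracts only half a point.
# The 51/49 split mirrors the abortion example above, scaled to 100 voters.
def score(upvotes: int, downvotes: int, downvote_weight: float = 0.5) -> float:
    """Net score with a configurable downvote weight."""
    return upvotes - downvote_weight * downvotes

print(score(51, 49, downvote_weight=1.0))  # equal weighting: +2.0, near zero
print(score(51, 49))                       # half weighting: +26.5
print(score(49, 51))                       # minority view still lands at +23.5
```

Both sides stay well above zero, so the minority comment isn't buried, while the ordering still reflects the majority.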
Now consider the extreme where each person gets one vote per thread. Ten people in TwoX each post "men are evil" and you post "penis power". On reddit that's 10×10 = 100 votes in favor of "men are evil" and 1 vote in favor of "penis power" if everyone votes on everything -- 100:1, far more extreme than the 10:1 split in ideology, so it amplifies an existing echo chamber. With one person, one vote, it's more like +5 for the top "men are evil", +2 for the next "men are evil", and -3 for "penis power": a much healthier, more balanced scoring where even small minority views can get seen if they aren't wildly offensive.
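A one-vote-per-user-per-thread tally could look like the following sketch (the vote-tuple format and the first-vote-wins rule are my assumptions, not an existing implementation):

```python
from collections import defaultdict

# Hypothetical data model: each vote is (user_id, comment_id, +1 or -1).
# Only a user's first vote in the thread counts; later votes are ignored.
def tally_one_vote_per_thread(votes):
    voted = set()
    scores = defaultdict(int)
    for user, comment, value in votes:
        if user in voted:
            continue  # this user already spent their single thread-wide vote
        voted.add(user)
        scores[comment] += value
    return dict(scores)

# Alice tries to vote twice; only her first vote is counted.
print(tally_one_vote_per_thread([
    ("alice", "c1", +1), ("alice", "c2", +1),
    ("bob", "c1", -1), ("carol", "c2", +1),
]))  # {'c1': 0, 'c2': 1}
```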
Of course bots are still a problem, but it's way easier for a bot to cast 100 votes than to run 100 bots undetected.
This site is based on reddit, which has a large number of bots controlling narratives, so it suffers some of the same risks.
Upvoting is very powerful. It can determine which posts reach rising or hot, where most users see them. It also affects which replies people see, although just being the first to reply, or the first to reply to the top reply, can do that without bots. Most users who read comments will upvote the topmost comment but don't bother scrolling down to upvote later ones.
An AI that reads new posts, or a few cheap staffers who upvote or downvote them, would be effective.
I'm not aware of .win having public API access, so making complex bots is harder, but not impossible. The same goes for auto-moderators like reddit has. Subreddits also censor by requiring that new posters have their posts approved.
Someone once posted a list of users active across sections of the .win network, so that kind of scraping is possible, although not everything they posted was accurate.
Why waste your time on a bot farm? Have a user with admin power pick what you want to give 10,000 upvotes to and post it on a sock account ready to manipulate. Indian phone farms are what poor people use to manipulate online media. People with real money and power tell the guy doing it for free to post what they want, then send him a Mario lunch box signed by Chris Pratt.
Do some accounts upvote the same posts at the same time beyond a safe percentage? Are some accounts posting identical posts? Does a post include a link to a known scam site?
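The first question (accounts voting on the same posts in lockstep) can be approximated with a set-overlap check. A sketch, ignoring the timing dimension for simplicity; the thresholds here are made-up guesses, not tuned values:

```python
from itertools import combinations

# Flag account pairs whose upvoted-post sets overlap suspiciously
# (Jaccard similarity); thresholds are illustrative assumptions.
def suspicious_pairs(upvotes_by_account, threshold=0.9, min_votes=20):
    flagged = []
    for (a, votes_a), (b, votes_b) in combinations(upvotes_by_account.items(), 2):
        if len(votes_a) < min_votes or len(votes_b) < min_votes:
            continue  # too little activity to judge either account
        jaccard = len(votes_a & votes_b) / len(votes_a | votes_b)
        if jaccard >= threshold:
            flagged.append((a, b, jaccard))
    return flagged
```

Adding vote timestamps would catch the "at the same time" part: compare inter-vote intervals as well as which posts were voted on.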
Flag for review (not suspension) if an account keeps posting links with the same text. This can detect unknown scam sites or referral links. You're not looking for someone who posts multiple links to the same news site, but if they exclusively post links from that site, it could mean they're affiliated with it.
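The link-exclusivity flag described here is also simple to prototype. A sketch with assumed thresholds (a 10-link minimum and 95% of links from one domain; both numbers are arbitrary placeholders):

```python
from collections import Counter
from urllib.parse import urlparse

# Queue an account for human review (not suspension) when nearly all of
# its posted links resolve to a single domain; thresholds are guesses.
def should_review(link_history, min_links=10, exclusivity=0.95):
    if len(link_history) < min_links:
        return False  # not enough links to judge
    domains = Counter(urlparse(url).netloc for url in link_history)
    _, top_count = domains.most_common(1)[0]
    return top_count / len(link_history) >= exclusivity

print(should_review([f"https://shady.example/p{i}" for i in range(20)]))        # True
print(should_review(["https://a.example/1"] * 10 + ["https://b.example/2"] * 10))  # False
```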
Accounts with AI or people behind them are much harder to detect.
There's a sizable industry selling followers to people on social media. There's video from China of walls of phones used for this purpose. Not everyone followed by bots bought them; the bots randomly follow some people to seem more legitimate.
There are users with multiple accounts. Twitter was supposed to combine these and report each person as a single account, but after Musk's agreement they reported there was a "bug" and that they had been counting them as separate users.
Many governments and influence organizations have accounts they use to push narratives. These are a second type of bot. It's why you sometimes see different accounts posting the same thing. Sometimes these are copy-paste jobs, sometimes it's AI, and sometimes an agent is posting directly.
I don't think this counts, but there are also dead accounts: legitimately made accounts that a user abandoned. The same user might have abandoned Twitter entirely and decided to live a real life, or actually died.
Good post.
Years ago I worked for a company that advertised on a station that played Rush Limbaugh. I helped with social media, and after he said something about the border we started getting hundreds of identical tweets demanding a boycott, from a bunch of accounts that didn't appear legitimate (generic profile pic, no personal info, etc.) and that were scheduled to be sent at the same interval. The "Russian bots" trend always made me laugh after this.
I've also always found the number of 40-to-60-year-old women with 100,000+ tweets, all aimed at right-leaning political pundits and politicians and making very little sense in context, puzzling and reeking of bot astroturfing.
I speculated he might do this. It just makes sense. They were clearly overvaluing the platform with fake activity.
IMO talking about it on twitter doesn't make sense, but that's just how he rolls.
It's a shame that this site is dominated by "stickies" instead of merit.
I was answering your first sentence.
I have a random, password-generator-style username that I use to monitor keywords with TweetDeck. 0 tweets. 4,000+ "followers" (even after the purge).
Something's always been fucky about twitter's engagement numbers.
twitter archive, reuters article archive, TWTR stock on yahoo finance (day: around -9%), (year chart)
Just what constitutes a 'fake' account, anyway?