“If you’ve ever hosted a potluck and none of the guests were spouting antisemitic and/or authoritarian talking points, congratulations! You’ve achieved what some of the most valuable companies in the world claim is impossible.”
Just want to take a moment to say…
Thank you, moderators. Sincerely.
There is one complaint I have about the mods across Lemmy: they seem hesitant to crack down on trolling. This, in turn, has made trolling easy, thanks to the audience Lemmy has attracted. Love the mods here, but when someone calls out a troll, maybe don’t remove the comment calling out the troll and leave the troll alone to continue trolling. Your contribution is actively negative if you do this.
Defining “trolling” is complex.
People have unpopular opinions that they vehemently defend… it’s not always bait or just being an asshole for the fun of it.
The difference between “notice me, I’m an attention-seeking asshole” and “notice me, my opinion is unpopular” can be difficult to discern, but becomes obvious with context. Most especially when you both point them out and provide the context. As an attention-seeking asshole, it’s a bit frustrating to be told that I can’t recognize someone doing what I do, but far less subtly.
I think just the nature of social media in and of itself creates “trolls”…
Everyone seems to think WE all know everything about everything and we just can’t resist the compulsion to tell and “educate” others.
That’s precisely why it can be frustrating to attempt to save someone the headache of engaging with a person speaking in bad faith, only to have the warning removed and to be told “don’t do that again, you’re being rude.” There are quite a few naturally occurring issues with social media, not least of which are trolls, and hand-waving them away does little to improve the situation.
Look, all I’m saying is that if I see a streak of people writing dissertations in reply to a visibly disingenuous commenter, my warning might be worth keeping.
Lol… I don’t even know what to say to you anymore…
It basically goes back to what I said earlier… People define things differently and believe and are passionate about different things.
“Bad faith” is something that people may not always agree on and could be considered subjective, too.
Cool though… I guess it kind of proves my point actually.
Dude, it’s accounts named “boomer opinions” or “communist git” roleplaying as racists and extermination enthusiasts. I understand your point, you just lack context and a relevant point. Had you asked rather than affirmed, you might have had both.
The mods are trolls too; don’t kid yourself. I’ve had several less-than-amazing mod interactions. There’s even less here on Lemmy to prevent mod abuse, and it’s absolutely already rampant.
One doesn’t need to browse the modlog of instances like lemmy.ml for long to realize there’s still a lot to be improved on. Mods and admins can just delete your messages and ban you for literally whatever reason, and there’s nothing you can do about it.
One could argue that doesn’t matter on Lemmy because you can just move to another instance, but considering how much complaining there is about Twitter here every single day, I’m not sure those people actually believe what they’re saying.
That’s certainly part of what made reddit get so shitty too.
Rebecca Watson had a nice breakdown of how Wikipedia avoided this:
https://www.youtube.com/watch?v=U9lGAf91HNA
Basically, they nipped that shit in the bud, didn’t allow it to take root, and the Nazis gave up. The ol’ anecdote of kicking the polite Nazi out of the bar so it doesn’t become a Nazi bar holds up.
The only time that a corporation will take action is when it impacts profits.
If the Nazis drive more profits than they lose, they will stay. It’s as simple as that.
Any corporate social conscience is just a show. Post some rainbows and say that shutting down Nazis is too difficult, but don’t do anything that might reduce the profits that the hate controversies create.
Non-profit platforms like Lemmy can do what is right instead of what is profitable.
Most advertisers don’t really want their ads to be shown alongside Nazi content. One thing users can do is to send the advertisers’ PR contacts a screenshot of their ad beside someone calling for racial holy war. “Hey, I’m not buying your beans any more because you advertise on Nazi shit” is a pretty clear message.
Anyone who spent a lot of time on the better subreddits knew that already.
edit: I forgot to add - this is probably why Huffman did what he did on reddit - they know it’s perfectly possible to properly mod online communities… they don’t want them properly modded.
Any community that welcomes bigots is truly welcoming only to bigots.
Any civility rule that is enforced with greater priority than (or in the absence of) a “no bigotry” rule serves only to protect bigots from decent people.
Bigots already have too many places where they are welcome and protected. I’m glad that lemmy (with the exception of certain instances that are largely defederated) has not fallen into the trap that defines too much of social media.
“Any civility rule that is enforced with greater priority than (or in the absence of) a ‘no bigotry’ rule serves only to protect bigots from decent people.”
There’s a saying I think about a lot that goes “The problem with rules is that good people don’t need 'em, and bad people will find a way around 'em”.
The best thing about human volunteer mods vs automated tools or paid “trust and safety” teams, IMO, is that volunteer humans can better identify when someone is participating in the spirit of a community, because the mods themselves are usually members of the community too.
Easy, my ass. It takes an insane amount of man-hours to moderate large platforms.
The key word here is “large”. From the article:
“[Fediverse] instances don’t generally have any unwanted guests because there’s zero incentive to grow beyond an ability to self-moderate. If an instance were to become known for hosting Nazis —either via malice or an incompetent owner— other more responsible instances would simply de-federate (cut themselves off) from the Nazi instance until they got their shit together. Problem solved, no ‘trust and safety’ required”
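To make the mechanics concrete, here is a minimal sketch of what defederation amounts to at the protocol level: an instance simply refuses inbound activities from actors on domains it has blocked. This is hypothetical illustration code, not any real server’s implementation; the blocklist contents and the function name are made up.

```python
# Hypothetical sketch: an ActivityPub-style server drops inbound
# activities whose actor lives on a blocked (defederated) instance.
from urllib.parse import urlparse

BLOCKED_INSTANCES = {"nazibar.example", "spamfarm.example"}  # made-up domains

def accept_activity(activity: dict) -> bool:
    """Return True only if the activity's actor is on a non-blocked domain."""
    actor_domain = urlparse(activity["actor"]).hostname
    return actor_domain not in BLOCKED_INSTANCES

# An activity federated from a blocked instance is simply rejected:
incoming = {"actor": "https://nazibar.example/users/troll", "type": "Create"}
print(accept_activity(incoming))  # -> False
```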
We have more mods per capita than the corpo hellsites, and we don’t even have venture capital money funding us.
Corporations don’t aggressively moderate and ban Nazis on their platforms because it would measurably and negatively affect their MAU stats, one of the primary metrics social media corps use to report how “good” (read: profitable) their platforms are.
Meta et al. will NEVER intentionally remove users who push engagement numbers up (regardless of how, or around what topics, that engagement happens) unless:
- they determine it’s more profitable/less harmful to their business to do so
- they are forced to by a court order
Which is another way the fediverse is better: The success metric is a vibrant, happy community, not MAUs or engagement numbers, so they make decisions accordingly.
Not to mention that because the fediverse doesn’t require the collection of analytics, it is less expensive to run. Most of the servers at Facebook are used to gather, sift, and deliver usage metrics. Actually serving content is a cheap and largely solved problem.
“The success metric is a vibrant, happy community, not MAUs or engagement numbers, so they make decisions accordingly.”
YES, well said. An instance is measured by its quality, not its profitability.
Twitter has always encouraged gawking at horrible behavior, and its culture has norms like “ratio” which promote “bad examples” so that they can be publicly shamed.
Let’s not be like Twitter.
Well, an instance can choose to behave like twitter, but everyone else can federate with them or not at their discretion.
I’m so sorry to all the thoughtless fanboys out here, but this is such a disingenuous fluff piece it doesn’t even deserve discussion.
There’s plenty of Nazis in the Fediverse, just not on any instances your instances are federated with.
Exactly. It was easy to partition them off.
In my potlucks’ favor, though, basically everyone who attends them is a member of a group targeted by Nazis.
Those already in economic power have gained enough means to manipulate the rules, and fascism is more profitable for people already in power than even ‘normal’ capitalism is. This was basically preordained for as long as the rule has been profit uber alles.
I think it’s a numbers game. If the fediverse had the numbers, it would be plagued with all the same issues. But it’s a little fish in a big pond.
If a Fediverse instance grew so big that it couldn’t moderate itself and had a lot of spam/Nazis, presumably other instances would just defederate, yeah? Unless an instance is ad-supported, what’s the incentive to grow beyond one’s ability to stay under control?
deleted
“questionable pictures”
We need to keep distinguishing “actual, real-life child-abuse material” from “weird/icky porn”. Fediverse services have been used to distribute both, but they represent really different classes of problem.
Real-life CSAM is illegal to possess. If someone posts it on an instance you own, you have a legal problem. It is an actual real-life threat to your freedom and the freedom of your other users.
Weird/icky porn is not typically illegal, but it’s something many people don’t want to support or be associated with. Instance owners have a right to say “I don’t want my instance used to host weird/icky porn.” Other instance owners can say “I quite like the porn that you find weird/icky, please post it over here!”
Real-life CSAM is not just extremely weird/icky porn. It is a whole different level of problem, because it is a live threat to anyone who gets it on their computer.
No, let’s just say both are fucking creepy and not allow either thanks. Your desire to draw a line between them is sus also.
You’d be surprised by how much of the Internet was built by furries, BDSM folk, and other people whose porn a lot of folks think is weird and icky.
Also, you seem to have misunderstood the gist of my comment, or I wasn’t clear enough. The tools to deal with CSAM will of necessity be a lot stronger than content moderation that’s driven by users’ preferences of what they’d like not to see.
The issue is your categorization, and either the thought, or lack of thought, that went into making those categories: “real CSAM” and “the icky stuff.”
When you categorize the first as “real,” it leaves a gap for the rest, the “fake” and “implied” CSAM, which I, the reader, am left assuming goes in your other category, especially since your other category has no specifics, and we all know what CSAM is. That was the logic behind my comment:
“If somebody is tiptoeing around abusive material it’s because they want to view abusive material.”
Also, I find it suspect that you’ve characterized the issue with CSAM as being that you can get in trouble for owning it, not that it wrecks somebody’s fucking life to make…
Honestly I think you would be better off deleting your comment completely. White knighting the term “questionable pictures” in a public forum isn’t a good look regardless of what you meant.
I’m talking about the necessities of moderation policy.
The things you find it “suspect” that I’m not saying? Those are things I think are obviously true and don’t need to be restated. Yes, child abuse is very bad. We know that. I don’t need to say it over again, because everyone already knows it. I’m talking specifically about the needs of moderation here.
I’m pointing at the necessary distinction between “you personally morally object to that material” and “that material will cause the law to come down on you and your users and anyone who peers with you”.
You should have the ability to keep both of those off your server, but the latter is way more critical.
“White knighting”? Delete your account.
How is this news?
It’s literally a blog post.
Confirmation bias is how, let’s be real.
It appears the community isn’t exclusively for news.
The sidebar agrees with you:
“This is a most excellent place for technology news and articles.”
(who’s account isn’t banned within a few hours),
Whose
And that’s where I stopped.
I look at that as proof it wasn’t written by GPT.