Moderation Philosophy - On Content Removal
(docs.beehaw.org)
What about misinformation?
Without downvotes it will slowly bubble up to the top, because the only barrier is finding enough people gullible or ignorant (in the precise sense, not as an insult) enough to believe it. Or, if it's "pop culture misinformation", it rises to the top by virtue of being popular misinformation.
Neither of those is ideal for quality content or fact-based discussion and debate when vote counts exist, since to a layman, more often than not, more votes = more true.
We've seen this on every other platform with "the only direction is up" mechanics, precisely because the only direction is up.
Another risk is promoting misinformed communities, whose members find comfort in each other because their shared, incorrect opinions about what should be fact-based truths find common ground. I don't think those are the kinds of communities beehaw wants. Thankfully community creation is heavily managed, which may mitigate or remove such risks entirely.
What I'm getting at is what will the stance be here? If beehaw starts fostering anti-intellectualism, will that be allowed to grow and fester? It's an insidious kind of toxicity that appears benign, till it's not.
To be clear, I'm not saying these things exist or will exist on beehaw in a significant capacity. I am stating a theoretical concern based on the fact that there is always a subset of your population that is misinformed and will believe and spread misinformation, and some of that subset will defend those views vehemently and illogically.
I would hate to see that grow in a place that appears to have all the quality characteristics I have been looking for in a community.
The lowest common denominator of social media will always push to normalize all other forms and communities. It's like a social osmosis. Most communities on places like Reddit failed to combat and avoid such osmosis. Will beehaw avoid such osmosis over time?
Most misinformation is poorly veiled hate speech, and as such it would be removed. Downvotes don't change how visible it is or how much it spreads. You deal with misinformation by removing it and banning repeat offenders/spreaders.
I would argue that only a subset of misinformation is veiled hate speech. The rest, and the majority, is misinformed individuals repeating/regurgitating their inherited misinformation.
There is definitely some hate speech veiled as misinformation; I'm not arguing against that. My argument is that it's not the majority. There are severity scales of misinformation, with hate speech near the top and mundane, conversational, everyday factual incorrectness near the bottom.
There exists between those two a range of unacceptable misinformation that should be considered.
A consequence of not considering or recognizing it is a lack of respect for the problem, which lets the problem exist unopposed.
I don't have a solution here, since this is a broad and sticky problem and moderating misinformation is an incredibly difficult thing to do. But identifying and categorizing the levels you care about, and the potential methods to mitigate them (whether you can actually employ those is another problem), should, in my opinion, be on the radar.
If you're volunteering to take it on, feel free to put together a plan. Until then you'll have to trust that we're trying to moderate within scope of the tools we have and the size of our platform, but we're still human and don't catch everything. Please report any misinformation you see.
Maybe my edit was too late! I did not communicate my objective clearly and edited my comment to reflect that.
I'm not proposing you solve misinformation, but rather that you recognize it as more than you stated, and respect the problem. That's the first step.
This is not something I can do, it is only something that admins can do in synchrony as a first step. I am doing my part in trying to convince you of that.
Only after that has been achieved can solutions be theorized and probed. That's something I would happily be part of, and do footwork towards (though I'm sure there are experts in the community; it's a matter of surfacing them). It's a long-term project that takes a considerable amount of research and time, and doing it without first gaining traction on the problem space would be a fool's errand.
At the risk of sounding abrasive (I intend no disrespect, just not sure how else to ask this atm), is that understood/clear?
Edit: Want to note that I am actually impressed by the level of engagement community founders have had. It's appreciated.
Yes, it's one of many problems with modern social media, and no, I don't have time right now to elaborate a plan for how to tackle it. Something on this subject will likely come much further in the future, but right now, when I'm not busy living my life, I'm focused mostly on creating the docs necessary for people to understand our ethos.
An excellent example of very sneaky misinformation was an article in the Guardian the other day, which kept talking about 700,000 immigrants. Since 350,000 of those are foreign students, that is a blatant lie. Foreign students aren't immigrants.