
The title sounds horrible. Maybe the concept is, too. But bear with me.

BLUF: a server extension that allows servers to replicate and exchange user-profile social scores, as defined by the community.

Caveat: I'm writing from an essentially democratic-values POV. Libertarians, anarchists, and fascists will disagree with some of my premises, for differing reasons.

An idea I've had knocking around for years is that regulating all behavior is a bad idea. We want as few laws covering personal behavior as possible while still ensuring safety and basic harmony. Laws covering rape, theft, and assault, I think most of us can agree, are good. Laws covering how people dress, what music they listen to, or the books they read are bad laws. Laws covering hate speech and noise in public spaces are in a gray area.

But this doesn't mean there hasn't historically been regulation of social behavior; it's just been done by peer pressure. Using racist language might not be illegal, but the person using it might be ostracized or booed, and that might affect their behavior. That can be good social pressure. A goth might also get dirty looks from people, and this might cause them to change how they present in public; this could be bad social pressure. However, I argue that there is a role for social pressure that starts where the law ends and helps preserve polite society. If you disagree, then you probably disagree with this idea, and can stop reading.

What if, for every profile, servers maintained metadata about the profile that the profile owner does not control? This would be a set of labels assigned by other members of communities, arbitrarily and without moderation. The metadata would be a label and a score, the score being a simple count of the number of users who have attached that label to the profile. This metadata would be communicated by servers when profiles are shared; we could imagine a central server hosting the data, but this is a federated ecosystem, so the data would also be federated.
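
To make this concrete, here's a hypothetical sketch of what that metadata might look like, written as TypeScript types. None of these field names come from the ActivityPub spec; they're invented purely for illustration.

```typescript
// Hypothetical shape for community-assigned profile labels.
// The owner of the profile cannot write to this data.
interface ProfileLabel {
  label: string; // e.g. "troll", "reasonable"
  score: number; // count of distinct users who attached this label
}

interface LabeledProfile {
  actorId: string;        // the profile being described
  labels: ProfileLabel[]; // maintained and federated by servers
}

// What one server might send alongside a profile:
const example: LabeledProfile = {
  actorId: "https://example.social/users/alice",
  labels: [
    { label: "troll", score: 112 },
    { label: "reasonable", score: 400 },
  ],
};
```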

The impetus for this idea was the increase in troll accounts as Lemmy becomes more popular. A story might go:

I see a post, and a response which leads to a conversation. It appears to me that one of the accounts is acting like a troll, so I mark the account "troll", giving it a "troll:1" score. Maybe several other people agree and separately also add "troll"; through this and other interactions, the account eventually ends up with some high "troll" score.

Clients can handle this data in various ways. They could annotate the account in user views. Users could set thresholds, such as "hide comments by users with troll value > 100". Or clients could simply refuse to do anything with the data.
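
As a rough sketch of the client side, reusing the LabeledProfile shape from above, a threshold filter might look like this; the helper name and the 100 cutoff are just the example from the previous paragraph, not a proposal:

```typescript
// Hide comments from accounts whose score for any label exceeds a
// user-configured threshold. Purely illustrative client logic.
function shouldHide(
  profile: LabeledProfile,
  thresholds: Map<string, number>
): boolean {
  return profile.labels.some((l) => {
    const limit = thresholds.get(l.label);
    return limit !== undefined && l.score > limit;
  });
}

// "hide comments by users with troll value > 100"
const myThresholds = new Map([["troll", 100]]);
shouldHide(example, myThresholds); // true for the example profile above
```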

Ignoring the difficulty of implementation details (how do you ensure each user only gets to increment a value one time? Who defines the labels? Is it an arbitrary set, and if so, how can servers filter for offensive labels? How do you prevent bad-actor servers from assigning their own fake scores?), I wonder whether this would be a net benefit or a net negative.
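
For the first of those questions, one naive answer is to count voters rather than votes, keyed on labeler identity; this sketch assumes a server can trust the actor IDs it receives, which a bad-actor server in a federated system can spoof:

```typescript
// Naive one-vote-per-user bookkeeping: a label's score is the size of
// the set of distinct labelers, so re-labeling is a no-op.
class LabelTally {
  private voters = new Map<string, Set<string>>(); // label -> labeler IDs

  addLabel(labelerId: string, label: string): void {
    let set = this.voters.get(label);
    if (!set) {
      set = new Set<string>();
      this.voters.set(label, set);
    }
    set.add(labelerId); // adding the same labeler twice changes nothing
  }

  score(label: string): number {
    return this.voters.get(label)?.size ?? 0;
  }
}

const tally = new LabelTally();
tally.addLabel("https://a.example/users/bob", "troll");
tally.addLabel("https://a.example/users/bob", "troll"); // ignored
tally.score("troll"); // 1
```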

Honestly, while I am not interested in building an echo chamber, I have no interest in reading the opinions of pro-Nazi fascists. I do not enjoy watching the pastime of trolls trying to foment arguments. And I'm happy to crowdsource the evaluation of people's behavior to filter this content. A good troll can waste a lot of time by appearing to argue in good faith; only after a long series of exchanges does it become obvious that they're just being contrarian, with the sole goal of making people upset.

This was really more of a shower thought, except it occurred to me on the couch when I was reading a thread where one person was obviously arguing in good faith, and the other was obviously trolling.

I think it could balance itself if there were positive, negative, and neutral labels. Maybe we all troll a little at times; maybe we have bad days, or make poor judgments in our replies. But I think this tally idea would work if the scores are thought of and used as thresholds. Maybe an account has a "troll" rating of 50 but also a "reasonable" score of 400. Maybe someone with a "communist" score of 1000 would be proud of it, while fascists would consider it a negative and filter those accounts.
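
One way to express that balancing, again building on the earlier sketches, is to hide an account only when a negative label clearly dominates a positive one; the 4:1 ratio here is an arbitrary illustration, not a recommendation:

```typescript
// Hide only when the negative label overwhelms the positive one.
function dominatedBy(
  profile: LabeledProfile,
  negative: string,
  positive: string,
  ratio = 4
): boolean {
  const get = (name: string) =>
    profile.labels.find((l) => l.label === name)?.score ?? 0;
  return get(negative) > ratio * Math.max(get(positive), 1);
}

// troll 50 vs reasonable 400 -> kept; troll 500 vs reasonable 20 -> hidden
```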

Mostly, I think it'd hurt troll-like behavior, agents provocateurs, and shill/advertisement accounts whose success relies on subterfuge and misrepresentation.

I have no doubt that this will be a controversial idea. I'm not sure that I love it or think it's a good idea, myself. But I had the thought, and now it's out there.

heavydust@sh.itjust.works 3 points 6 hours ago

Karma was always a stupid concept (whether it was Digg or Reddit) and you want to put that thing with your real identity and allow 4chan's army to judge and bully people? What could go wrong?