this post was submitted on 23 Feb 2026
119 points (96.9% liked)

Fediverse


A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, Mbin, etc).


Vote manipulation is getting more common. Some recent examples:

While the accounts were banned, the malicious voting activity stuck around.

Should admins have the ability to discard votes, and if so, which admins? Should community mods have that ability? Can you think of any ways that tools like this could be abused?

[–] Blaze@piefed.zip 2 points 3 days ago (1 children)

So with that example: what do the flags do that the content of their posts don’t already communicate?

It warns other users that this commenter may be a bad faith user / troll.

Usually when I encounter a troll, I check their profile to see if they are indeed a troll. The warning saves some time on that, and is accurate the vast majority of the time.

[–] ZombiFrancis@sh.itjust.works 1 points 2 days ago (1 children)

I guess I approach it inversely. I encounter what looks like a troll post, and I'll only check profiles when either I'm interacting with them or there's already such deep downvoting that I'm just doing a morbid dive into someone's history.

Most of the time though the user just has a deeply downvoted argument but otherwise normal and/or low engagement posts, so they wouldn't be flagged by this.

So I understand that it can save some time in some niche cases.

But I can't help but note that the system seems intentionally blind to targeted harassment, which can be a source, if not cause, of bad faith accounts. (And likely those need different approaches since those are also niche cases themselves.)

And maybe it's all just because of my instance's Local feed, so that's what I see as a prominent problem on Lemmy.

[–] Blaze@piefed.zip 2 points 2 days ago (1 children)

But I can’t help but note that the system seems intentionally blind to targeted harassment, which can be a source, if not cause, of bad faith accounts. (And likely those need different approaches since those are also niche cases themselves.)

If you mean using puppet accounts to massively downvote someone, that's also tracked, but with another tool.

[–] ZombiFrancis@sh.itjust.works 2 points 2 days ago

Not necessarily puppet accounts, just brigading in general.

It's the rationale many instances used to defederate hexbear. (Even though iirc hexbear disables downvotes, so they're defederated for users mass posting, usually that hogshit image, instead of mass voting.) It wasn't puppets or bot accounts at any rate.

But then there are repost communities where users share comments (especially from places they or their audience are banned from) or DMs for a group response.

Not to mention the whole 'block and downvote all .ml on sight' mentality. But hopefully that's something this tool could catch.