submitted 1 month ago* (last edited 1 month ago) by neme@lemm.ee to c/technology@lemmy.world
[-] brucethemoose@lemmy.world 247 points 1 month ago* (last edited 1 month ago)

It requires them to restrict certain categories of video, so that users cannot share content on cyberbullying, promoting eating disorders, promotion of self harm or incitement to hatred on a number of grounds.

Wow, what a horrible, restraining overreach.

I am shedding tears for the 1.2% engagement loss this would cost Reddit next quarter. Imagine what they have to pay devs for filtering abusive videos!

(I hate to sound so salty, but it's mind-boggling that they would fight this so vehemently, instead of just... filtering abusive content? Which they already do for anything that actually costs them any profit.)

[-] Lost_My_Mind@lemmy.world 85 points 1 month ago

Well......the problem is reddit's size.

I'm not part of reddit anymore because they filtered me out for abusive content.

The content that was so abusive? I told a story on /r/Cleveland about the time I got my bike stolen 35 years ago.

I wasn't accusing any current reddit user of being the thief. But reddit bots flagged me as being abusive to other users.

We don't even know if that guy who stole my bike 35 years ago is even still alive, much less an active redditor on /r/Cleveland. So who am I being abusive to, when I say it's a bad idea to let strangers ride your bike without some kind of assurance you'll get it back?

[-] GreenKnight23@lemmy.world 94 points 1 month ago

I got banned when I told a literal Nazi, one who said that Jews should die, that he should drink bleach to purify his genes before he contaminated the gene pool.

I still stand by it. My grandfather fucked up Nazis, and I'll fuck up Nazis too.

[-] 100@fedia.io 27 points 1 month ago
[-] spector@lemmy.ca 12 points 1 month ago

This is a common tactic. I've seen people describe the same process many times before.

  1. Nazi says literal Nazi shit.
  2. Person gets baited into responding.
  3. Person gets ban hammer. Nazi does not.
  4. Nazi moves on to next target. Repeat from step 1.

They usually trot this out when they see a comment or account they want to silence. That's how the fascists do censorship on reddit.

It's happened to me too. Since then I've seen people saying the same general thing has happened to them. They must know that reddit's content moderators, the "Anti-Evil Operations" or whatever bullshit, are on their side. It's the only explanation. Probably the nazis went and got jobs there. Or maybe it's just that spez is a nazi himself. Reddit, beneath the thin veneer of default subreddits, has always been a very right-leaning platform.

[-] SendMePhotos@lemmy.world 6 points 1 month ago

Anarchy in the US baby!

[-] brucethemoose@lemmy.world 29 points 1 month ago* (last edited 1 month ago)

Fair. +1

But also, that just sounds like they're cheaping out on content filtering. And, you know, kinda broke the enthusiastic community moderation that made it great in the first place.

[-] Lost_My_Mind@lemmy.world 20 points 1 month ago

Yes, that's true. This all happened like 3 weeks after their IPO. I didn't buy in, because I thought reddit had a decent chance of falling on its ass on the free market. It's a 10+ year old company that's never made a profit. It's reasonable to assume it might fail.

3 weeks after I declined, and they went public, I suddenly got 3 temporary bans in a week, and the 3rd one was a permanent ban. All by autobots.

[-] Dead_or_Alive@lemmy.world 11 points 1 month ago

Yeah, same here. The last post I made was to argue for more disabled access to European historical sites in the r/europe subreddit.

After everything I’ve posted, THAT is what got me banned.

After losing my appeal, I changed all my prior posts to AI-generated gibberish.

Fuck Reddit, salt your posts so they can’t use your content to make money on search or train AI.

[-] Lost_My_Mind@lemmy.world 7 points 1 month ago

I wish I could, but I have hundreds of thousands if not millions of comments.

Look at my time here, and now look at how many comments I have here, and know that I am running at MAYBE 5% of my posting capacity.

Lemmy just is barren of content if you don't care about politics, linux, or star trek.

[-] TheBat@lemmy.world 2 points 1 month ago

There are some extensions that can edit your comments automatically.

[-] lemonmelon@lemmy.world 3 points 1 month ago

Decepticons probably wouldn't be any more permissive, though.

[-] UnderpantsWeevil@lemmy.world 16 points 1 month ago

Well…the problem is reddit’s size.

They've never been shy about targeting certain subs and communities for shutdown when it suits their commercial interests. This has nothing to do with size and everything to do with the nature of the content itself.

These videos are pure clickbait. They feed engagement. They build up lots of enthusiasm both among content providers and active users. And, as a consequence, they make the company money.

But reddit bots flagged me as being abusive to other users.

Bots will flag any post purely based on keyword searches and AI parsing of sentiment. It's got nothing to do with your actual statement. But it also depends heavily on who you are, where you post, and how often other users flag you. Very possibly you simply got "Report"-flagged a bunch of times by other users for some reason, and that, plus a naive parsing, was all the AI bot needed to know.

But I'll also bet the post wasn't getting thousands of unique interactions and external visits. If you'd been a power-poster who was posting a face-cam rant rather than a text blob, I suspect you'd have been fine.
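A minimal sketch of the kind of naive auto-flagging described above (purely hypothetical; nobody outside Reddit knows what their pipeline actually looks like): a crude keyword hit plus a user-report count, with zero understanding of context, so a story about a bike stolen 35 years ago trips the same wire as actual abuse.

```python
# Hypothetical naive auto-flagger: keyword match + report count, no context.
FLAG_WORDS = {"stole", "stolen", "thief", "abuse", "abusive"}

def should_flag(text: str, report_count: int, report_threshold: int = 3) -> bool:
    words = set(text.lower().split())
    keyword_hit = bool(words & FLAG_WORDS)  # crude keyword search, no sentiment
    # flag only when a "bad" word co-occurs with enough user reports
    return keyword_hit and report_count >= report_threshold
```

Swap in a sentiment score or report-weighting and you get the same failure mode: the system sees "stolen" and a pile of reports, not who the comment is actually about.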

[-] rozodru@lemmy.world 13 points 1 month ago

Same with me on /r/Toronto: got banned for stating a long-dead prime minister was horrible to Indigenous people. They used the excuse that I was submitting too many articles about crimes in the city, as that subreddit's mods automatically remove any content about crime or pro-Palestinian content.

Post god knows how many photos of the CN Tower, the fucking sun setting, or snow... hey, that's great! Anything that's newsworthy and potentially paints the city in a bad light? Nope, censored. It's so bad that I'm convinced the mods there are being paid under the table by the City.

[-] Flocklesscrow@lemm.ee 6 points 1 month ago

The problem is Reddit's CEO. Full stop.

[-] bulwark@lemmy.world 4 points 1 month ago

I hate Reddit as much as the next guy but that just sounds like an asshole mod

[-] Lost_My_Mind@lemmy.world 6 points 1 month ago

In 3 different unrelated subs, and all said to be performed as an automated action?

Also of note, I got permabanned on May 7th.

May 4th I joined Mastodon, using the same email as my reddit email.

[-] fluxion@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

I'm gonna have to ask you to stop abusing whatever random reddit mod flagged you back then in case they might be here. Or else.

[-] chalupapocalypse@lemmy.world 17 points 1 month ago

They would have to hire a shitload of people to police it all along with the rest of the questionable shit on there, like jailbait or whatever other shit they turned a blind eye to until it showed up on the news

Not saying it's right but from a business standpoint it makes sense

[-] brucethemoose@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

Don't they flag stuff automatically?

Not sure what they're using on the backend, but open source LLMs that take image inputs are good now. Like, they can read garbled text from a meme and interpret it with context, easily. And this is apparently a field that's been refined over years due to the legal need for CSAM detection anyway.

[-] T156@lemmy.world 2 points 1 month ago

They do, but they'd still need someone to go through the flagging and check. Reddit gets away with it as it is like Facebook groups do, by offloading the moderation to users, with the admins only being roped in for ostensibly big things like ban evasion/site wide bans, or lately, if the moderators don't toe the company line exactly.

I doubt that they would use an LLM for that. That's very expensive and slow, especially for the volume of images that they would need to process. Existing CSAM detectors aren't as expensive, and are faster. They basically compute a hash for the image, and compare it to known hashes for CSAM.
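Hash-matching of that sort can be sketched in a few lines. This uses a generic perceptual "average hash" (not the actual PhotoDNA-style algorithm production detectors use): shrink the image to an 8x8 grayscale grid, turn it into a 64-bit fingerprint, and compare against known-bad fingerprints by Hamming distance, which is why it's so much cheaper than running an LLM per image.

```python
# Sketch of perceptual-hash matching: 8x8 grayscale grid -> 64-bit hash,
# then Hamming-distance comparison against a blocklist of known hashes.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # one bit per cell: is it brighter than the image's average?
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_flagged(img_hash, blocklist, max_distance=5):
    # allow a few bit flips so near-duplicates (recompressed/resized
    # copies of a known image) still match
    return any(hamming(img_hash, h) <= max_distance for h in blocklist)
```

Because matching is just XOR-and-popcount against a hash list, it scales to enormous upload volumes in a way per-image model inference doesn't.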

[-] brucethemoose@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Small LLMs are quite fast these days, even the multimodal ones. Same with small models explicitly used to filter diffusion output.

[-] ripcord@lemmy.world 0 points 1 month ago

A shitload of people, like as many as 10!

[-] GeneralInterest@lemmy.world 8 points 1 month ago

I hate to sound so salty, but it's mind-boggling that they would fight this so vehemently, instead of just… filtering abusive content?

I guess it's just enshittification. Profits are their first priority.

[-] reddig33@lemmy.world 2 points 1 month ago

I wonder what the investors like Condé Nast/Advance Publications think of this?

[-] Kaboom@reddthat.com 1 points 1 month ago

Probably would require them to actually pay moderators! The horror!

this post was submitted on 21 Oct 2024
874 points (98.8% liked)
