submitted 9 months ago by btaf45@lemmy.world to c/technology@lemmy.world
[-] Zak@lemmy.world 64 points 9 months ago

I think the design of media products around maximally addictive, individually targeted algorithms, combined with content the platform does not control and isn't responsible for, is dangerous. Such an algorithm will find the people most susceptible to everything from racist conspiracy theories to eating disorder content and show them more of that. Attempts to moderate away the worst examples of it just result in people making variations that don't technically violate the rules.

With that said, laws made and legal precedents set in response to tragedies are often ill-considered, and I don't like this case. I especially don't like that it includes Reddit, which was not using that type of individualized algorithm to my knowledge.

[-] refurbishedrefurbisher 19 points 9 months ago

This is the real shit right here. The problem is that social media companies' data show that negativity and hate keep people on their sites longer, which means they view more advertisements than they would with positive content.

It is human nature to engage with disagreeable topics more so than agreeable ones, and social media companies are exploiting that for profit.

We need to regulate algorithms and force them to be open source, so that anybody can audit them. They will try to hide behind "AI" and "trade secret" excuses, but lawmakers have to see through that bullshit.

Unfortunately, US lawmakers are both stupid and corrupt, so it's unlikely that we'll see proper change, and more likely that we'll see shit like "banning all social media from foreign adversaries" when the US-based social media companies are largely the cause of all these problems. I'm sure the US intelligence agencies don't want them to change either, since those companies provide large swaths of personal data to them.

[-] admin@lemmy.my-box.dev 3 points 9 months ago

While this is true for Facebook and YouTube - last time I checked, reddit doesn't personalise feeds in that way. It was my impression that if two people subscribe to the same subreddits, they will see the exact same posts, based on time and upvotes.

Then again, I only ever used third party apps and old.reddit.com, so that might have changed since then.

[-] cophater69@lemm.ee 4 points 9 months ago

Mate, I never got the same homepage twice on my old reddit account. I dunno how you can claim that two people with identical subs would see the same page. That's just patently not true and hasn't been for years.

[-] admin@lemmy.my-box.dev 3 points 9 months ago* (last edited 9 months ago)

Quite simple, aniki. The feeds were ordered by hot, new, or top.

New was ORDER BY date DESC. Top was ORDER BY upvotes DESC. And hot was a slightly more complicated order that used a mixture of upvotes and time.

You can easily verify this by opening 2 different browsers in incognito mode and going to the old reddit frontpage - I get the same results in either. Again - I can't account for the new reddit site because I never used it for more than a few minutes, but that's definitely how the old one worked and still seems to.

[-] deweydecibel@lemmy.world 2 points 9 months ago* (last edited 9 months ago)

It's probably not true anymore, but at the time this guy was being radicalized, you're right, it wasn't algorithmically catered to them. At least not in the sense that it was intentionally exposing them to a specific type of content.

I suppose you can think of the way reddit works (or used to work) as being content agnostic. The algorithm is not aware of the sorts of things it's suggesting to you, it's just showing you things based on subreddit popularity and user voting, regardless of what it is.

In the case of YouTube and Facebook, their algorithms take the actual content into account and funnel you towards similar content, in a way that is unique to you. Which means at some point the algorithm is acknowledging, "this content has problematic elements, let's suggest more problematic content."

(Again, modern reddit, at least on the app, is likely engaging in this now to some degree)
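The distinction being drawn here - per-user, content-similarity recommendations versus content-agnostic vote ranking - can be shown with a toy sketch. This is purely illustrative (no platform's actual code; all names are made up for the example):

```python
# Toy sketch of a content-aware, per-user feed: rank unseen items
# by similarity to what this user already engaged with, so each
# user gets a different ordering. Illustrative only - not any
# platform's real recommender.

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two items' topic-tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def personalized_feed(catalog: dict, engaged: list) -> list:
    """Rank items the user hasn't seen by summed similarity to
    their engagement history."""
    def score(item: str) -> float:
        return sum(similarity(catalog[item], catalog[e]) for e in engaged)
    candidates = [i for i in catalog if i not in engaged]
    return sorted(candidates, key=score, reverse=True)
```

If a user engages with one conspiracy-tagged post, a feed like this surfaces more of the same tags for that user specifically - exactly the funneling effect described above, and something a pure votes-plus-time sort never does.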

[-] cophater69@lemm.ee 3 points 9 months ago* (last edited 9 months ago)

That's a lot of baseless suppositions you have there. Stuff you cannot possibly know - like how reddit content algos work.

[-] deweydecibel@lemmy.world 5 points 9 months ago

> Attempts to moderate away the worst examples of it just result in people making variations that don't technically violate the rules.

The problem then becomes if the clearly defined rules aren't enough, then the people that run these sites need to start making individual judgment calls based on...well, their gut, really. And that creates a lot of issues if the site in question could be held accountable for making a poor call or overlooking something.

The threat of legal repercussions hanging over them is going to make them default to the most strict actions, and that's kind of a problem if there isn't a clear definition of what things need to be actioned against.

[-] rambaroo@lemmynsfw.com 4 points 9 months ago

Bullshit. There's no slippery slope here. You act like these social media companies just stumbled onto these algorithms. They didn't; they designed them intentionally to drive engagement up.

Demanding that they change their algorithms to stop intentionally driving negativity and extremism isn't dystopian at all, and it's very frustrating that you think it is. If you choose to do nothing about this issue I promise you we'll be living in a fascist nation within 10 years, and it won't be an accident.

[-] bigMouthCommie@kolektiva.social 1 points 9 months ago

this is exactly why section 230 exists. sites aren't responsible for what other people post and they are allowed to moderate however they want.

[-] VirtualOdour@sh.itjust.works 0 points 9 months ago

It's the chilling effect they use in China: don't make it clear what will get you in trouble, and people become too scared to say anything

Just another group looking to control expression by the back door

[-] rambaroo@lemmynsfw.com 9 points 9 months ago* (last edited 9 months ago)

There's nothing ambiguous about this. Give me a break. We're demanding that social media companies stop deliberately driving negativity and extremism to get clicks. This has fuck all to do with free speech. What they're doing isn't "free speech", it's mass manipulation, and it's very deliberate. And it isn't disclosed to users at any point, which also makes it fraudulent.

It's incredibly ironic that you're accusing people of an effort to control expression when that's literally what social media has been doing since the beginning. They're the ones trying to turn the world into a dystopia, not the other way around.

[-] rambaroo@lemmynsfw.com 3 points 9 months ago

Reddit is the same thing. They intentionally enable and cultivate hostility and bullying there to drive up engagement.

[-] deweydecibel@lemmy.world 2 points 9 months ago

But not algorithmically catered to the individual.

[-] Kalysta@lemmy.world 1 points 9 months ago

Which is even worse because more people see the bullying and hatred, especially when it shows up on a default sub.

this post was submitted on 20 Mar 2024
1013 points (98.0% liked)
