this post was submitted on 13 Feb 2026

Technology


Tech related news and discussion. Link to anything, it doesn't need to be a news article.

Let's keep the politics and business side of things to a minimum.


I know this topic, as well as most online topics, is more about emotions than facts. But here we are. You probably don’t understand how Discord's new policy is intended to help protect children because you aren’t trained to think like a child predator. (That’s fine.) I’ve had to take a lot of child safety trainings over the years for professional reasons. Here’s how online child predators work.

They start by finding a kid with a secret. Just like a lion will generally choose to attack the weak gazelle, child predators go after vulnerable kids.

They find the kids with a secret and say, "hey, want to see some porn?", and of course the kid is curious. They don't start with anything bad; this is a process for them. But they will tell the kid, "be sure you don't tell your parents about this. This is our secret." Then they slowly pull the kid into deeper and deeper secrets and start to blackmail them. They start demanding that the kid send them nude photos. They trap the kid in deeper and deeper secrets and guilt to get more and more out of them. In the worst cases this ends in meetups with the predator in person.

The easiest places for predators to start this process are porn sites that kids are already visiting in secret, and especially spaces on Discord, where messaging between users is the main feature. Those kids are the most vulnerable.

So how is Discord's policy supposed to protect kids? The goal is to keep vulnerable kids out of the spaces where they would be targeted in the first place.

So there you go. I'm all ears for how to do this better. That's one beef I have with the EFF right now: they offer no alternative solutions to this. They just don't want any kind of protections at all.

top 38 comments
[–] baronvonj@piefed.social 2 points 12 minutes ago

When Roblox rolled this out, kids were using AI tools to pass the face scan. Kids also find it easy to sneak their parents' ID and upload it in secret. Predators can use the same tactics. So we haven't gained any security, but some people will end up having their real data exposed.

The only real solution is for the parents/guardians to be engaged and involved in their children's lives. That means I have to occasionally invade their privacy and look at their communications, but more importantly it means I have to tell my children about the predators and what kinds of things predators will say to them. It's impossible to child-proof the world around us; we have to world-proof the children themselves.

[–] schwim@piefed.zip 7 points 2 hours ago

That's one beef I have with the EFF right now: they offer no alternative solutions to this. They just don't want any kind of protections at all.

That's because public discourse is saturated by the extremes at both ends of the spectrum. The part of society that feels there could be some solution in the middle is drowned out by Stallman zealots, who feel that diddling kids is bad but that you shouldn't be kept from doing it if stopping it requires any information about you, while the corporate sycophants cry that they can't protect THE KIDS if they don't have every piece of information on you from birth onwards.

I view it the same as politics. I'll let the others behave as if they have a say in it and argue for or against. I'll just quit using Discord when they require verification, as I will/have for any other corporation that does the same. Not because I don't think verification of some kind would help protect everyone from additional risk, but because providing my very sensitive information to corporations that both exploit my data and pass it on to hackers and third parties is not something I am willing to do.

[–] irate944@piefed.social 4 points 2 hours ago* (last edited 2 hours ago)

I'm firmly against what Discord is doing (and what governments like the UK and Australia, and soon others, are doing as well).

The main reason is distrust. I do not trust that they - or anyone - would use this data responsibly and only for its intended purpose.

While I do not doubt that these measures could protect more children, I also do not doubt that they will be abused. Businesses will violate whatever privacy we still have left in order to get more money from info-brokers/ad-companies, and governments will use it for control. The US has been proving this with ICE, which has been using Flock to target people.

That's why I always roll my eyes whenever these kinds of measures are introduced. They're always introduced with "think of the children!" right beside them.

There's a reason why Apple, years ago, refused to develop a backdoor for iPhones when the FBI requested/ordered them to. There's just no proper way to prevent abuse with backdoors. Yesterday they wanted to check a criminal's phone; tomorrow they may want to target an annoying journalist.

Same principle with this tracking. Once Discord (or anyone else) can tell that your account belongs to you (an IRL entity), there's nothing you can do to prevent them from abusing that knowledge. Let's assume that today they use this new system only for its intended purpose; who's to say that tomorrow they will?

Not to mention the data breach Discord suffered last year, in which around 70k proof-of-age IDs were leaked. So not only do you have to worry about Discord, you also have to worry about anyone else who may get their hands on your info.

Don't get me wrong, we NEED to improve the safety of children on the internet. I fully support doing this via education, improving parental controls, maybe even banning children from social media apps until a certain age, etc.

But abusing our privacy rights is not it.

[–] Lembot_0006@programming.dev 5 points 2 hours ago* (last edited 2 hours ago) (1 children)

Or maybe it would be healthier (and cheaper and more useful) just to teach kids not to share "secrets" with shady people on the Internet?

P.S. And what about non-vulnerable kids? What about adults? All are victims now!

[–] 1dalm@lemmings.world -1 points 2 hours ago

It's the victim's fault!

[–] Skavau@piefed.social 2 points 2 hours ago (1 children)

Should every single platform online be compelled to implement age-ID?

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

I'm open to better alternative ideas, but I really haven't heard any.

But yes. Every single platform that offers the opportunity for kids to interact with other users, especially strangers, should have some kind of protections. I think the platforms themselves should be held accountable for what happens on their platforms, just like the courts have held that the Boy Scouts and the Catholic Church are responsible for protecting the kids they serve. Discord doesn't get a pass.

[–] Skavau@piefed.social 2 points 1 hour ago (1 children)

I’m open to better alternative ideas, but I really haven’t heard any.

Can you tell me how it is logistically viable to conscript tens of thousands of websites into implementing age-ID?

You do realise you're interacting on a platform that would shut down if they had to do this because they can't afford it.

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

"You do realise you’re interacting on a platform that would shut down if they had to do this because they can’t afford it."

Yes. And I also believe the fediverse community should take this problem more seriously than it currently does, and not just wait until the government forces them to take it seriously.

One big difference is that the fediverse generally isn't marketing itself to kids to bring them onto the network, as opposed to other networks that directly market themselves to kids to get them locked in at a young age.

[–] Skavau@piefed.social 1 points 1 hour ago (1 children)

Yes. And I also believe the fediverse community should take this problem more seriously than it currently does, and not just wait until the government forces them to take it seriously.

How would the government do that? The Forumverse has 40k members (which is tiny) and it's split up into over 100 instances.

Who do they try and talk to?

How can the Fediverse "take it seriously" when they simply can't afford to?

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

Honestly, saying "we can't afford to take it seriously" is exactly what gets organizations in trouble.

You can't afford not to.

[–] Skavau@piefed.social 1 points 1 hour ago (1 children)

Honestly, saying “we can’t afford to take it seriously” is exactly what gets organizations in trouble.

The fediverse isn't an organisation.

As I asked: How would the government do that? The Forumverse has 40k members (which is tiny) and it’s split up into over 100 instances.

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

You wouldn't have to treat it like an organization. Go after individual hosts. If a police investigation found that a forumverse host was providing an opportunity for child predators to use their system to blackmail kids into sending nude photos of themselves, then I think the host, the individual, should be held responsible for what happens on their server. Just like they would be held responsible if it happened in their house.

[–] Skavau@piefed.social 1 points 1 hour ago (1 children)

You unironically think that governments are going to go after hosts that have, in many cases, fewer than 1000 active monthly users, purely because they don't have age-ID services on their platform?

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

100% they would. Yeah.

If child pornography was found to be stored on a host's server by one of their 1000 users, "I didn't think you guys would care about a platform with less than 1000 monthly users" isn't going to be a great argument in court.

[–] Skavau@piefed.social 1 points 1 hour ago (1 children)

100% they would. Yeah.

How would they know?

If child pornography was found to be stored on a host's server by one of their 1000 users, "I didn't think you guys would care about a platform with less than 1000 monthly users" isn't going to be a great argument in court.

You're talking here specifically about child pornography, not just about failing to age-verify users who access 'adult' content. No server, to my knowledge, allows that.

[–] 1dalm@lemmings.world 1 points 56 minutes ago (1 children)

How would they know?

Well, if, and really when, a predator is caught by the police, that police department will do a full investigation and find all the places where they were communicating with kids. Sooner or later, one of those places will turn out to be Lemmy. On that day, the host is going to need a good lawyer.

It's not enough to "not allow this". A person that allows anonymous strangers to use their servers to store information in secret is asking for trouble. They need to take much more care than that.

And I never said that age-verification is the only solution to this problem. >>

[–] Skavau@piefed.social 1 points 55 minutes ago (1 children)

It’s not enough to “not allow this”. A person that allows anonymous strangers to use their servers to store information in secret is asking for trouble. They need to take much more care than that.

What extra care should they take beyond deleting it when they find it? Which they do.

And I never said that age-verification is the only solution to this problem. >>

Remember, I originally started this chain by asking you if every single site online should be forced to implement age-ID and you said yes.

[–] 1dalm@lemmings.world 1 points 49 minutes ago (1 children)

Remember, I originally started this chain by asking you if every single site online should be forced to implement age-ID and you said yes.

Fair. But I really meant that every network should have policies in place, with age verification being one option. Elsewhere in this thread you'll see that I offer alternative solutions, such as simply keeping everything public and not allowing 1-to-1 messaging.

[–] Skavau@piefed.social 1 points 48 minutes ago (1 children)

Everything on the fediverse is publicly viewable (although Piefed now has the capacity for private communities), but banning DMs is pretty unacceptable really.

[–] 1dalm@lemmings.world 1 points 36 minutes ago (1 children)

Not exactly, and not for long. Mastodon, for example, is working on end-to-end encryption in messages. Matrix is also private by design.

And again, it's not that I think end-to-end encrypted one-to-one messaging is bad. But if you are going to offer it then you need to be held responsible for it.

[–] Skavau@piefed.social 1 points 35 minutes ago (1 children)

Not exactly, and not for long. Mastodon, for example, is working on end-to-end encryption in messages. Matrix is also private by design.

I meant publicly viewable in the sense of being viewable by the wider audience. Excluding private messages specifically here.

And again, it’s not that I think end-to-end encrypted one-to-one messaging is bad. But if you are going to offer it then you need to be held responsible for it.

So what do you propose then?

[–] 1dalm@lemmings.world 1 points 25 minutes ago (2 children)

ID and age verification for users.

That's not the only solution, and I've offered several others. And I'm also not the only one with ideas. But completely frictionless encrypted anonymous one-to-one communication is probably not going to last much longer. And shouldn't.

[–] Skavau@piefed.social 1 points 22 minutes ago

Also everyone here unironically thinks you're a shill. You're coming to a federated platform promoting big-tech, big-government controls.

[–] Skavau@piefed.social 1 points 23 minutes ago

ID and age verification for users.

Unaffordable. Not going to happen. This is what would kill the independent internet.

[–] Hond@piefed.social 1 points 2 hours ago* (last edited 1 hour ago) (1 children)

The main issue I personally see is that I can't trust big tech. Ever. Especially not with my biometric data or my government ID. If my government had an online service where I could verify my age and in return get some kind of official but anonymized hash/string that confirms my age to third parties, I wouldn't mind at all.
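
Purely as an illustration of the idea, here is a minimal sketch assuming a hypothetical issuer service, key handling, and token format (none of this is a real government or Discord API). The issuer signs a bare "over 18" claim plus a random nonce, so the platform can check the signature without ever seeing a name, ID number, or photo:

```python
# Hypothetical sketch only: the issuer service, key handling and token format
# are assumptions for illustration, not any real government or Discord API.
# The point is the split of knowledge: the verifier learns "over 18", nothing else.
import json
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side (the government service, after checking a real ID out of band).
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

def issue_age_token() -> bytes:
    """Sign a bare age claim plus a random nonce; no name or ID number is included."""
    claim = json.dumps({"over_18": True, "nonce": secrets.token_hex(16)}).encode()
    return claim + b"." + issuer_key.sign(claim).hex().encode()

# Verifier side (Discord or any other third party): check the signature against
# the issuer's published public key and read the claim, learning nothing about identity.
def verify_age_token(token: bytes) -> bool:
    claim, _, sig_hex = token.rpartition(b".")
    try:
        issuer_public_key.verify(bytes.fromhex(sig_hex.decode()), claim)
    except InvalidSignature:
        return False
    return json.loads(claim).get("over_18") is True

print(verify_age_token(issue_age_token()))  # True
```

A real scheme would need more than this (blind signatures or a zero-knowledge proof so the issuer can't link tokens back to requests, plus expiry and replay protection), but it shows that "prove your age to a third party" doesn't have to mean "hand your ID to that third party".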

Instead, the whole world is hellbent on deanonymizing everyone on the net while the political landscape in most countries leans more and more towards authoritarianism. That's a pretty shitty combination in my book.

With all that said, more safeguards for children would be great. But why not start with the inherent issues of our current online services, where profit stands above all? Most moderation tools on the biggest services are just algos, "AI", and outsourced workforces in the third world working as sub-sub-sub-sub contractors. Sure, you can't moderate hundreds of millions of users without any automation. But cutting into the profits a bit by employing more actual people would probably help a lot already. Emphasis on probably, idk, I'm just a random asshole on the internet.

/Also, it's not great to start your post by telling me that I'm an emotional dumb dumb if you want to convince me. While I don't agree with everything you said, there were a few new-to-me insights which I almost didn't read because you came off as an annoying know-it-all in the very first sentence.

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

"Main issue i personally see is that i cant trust big tech. Ever."

Me neither. And a big part of the reason why I personally don't trust them is that they advertise all these "services" to parents and kids and then only provide any sort of child protections when governments force them to.

(You want me to really get heated, get me started on youth sports!)

[–] Skavau@piefed.social 1 points 1 hour ago (1 children)

There are far more reasons not to trust them, but acting on your grievances here would absolutely force all of us onto big tech, as all the smaller forums and communities would be forced to shut down.

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

I don't believe that is necessarily correct. I think the fediverse community can figure out a way to police itself, and it already does so pretty well. One easy option is that "blocking" people fast and early is generally accepted, and done at the server level. Another is that things are generally more public here, so there is less opportunity to pull people into "quiet corners" alone.

[–] Skavau@piefed.social 1 points 1 hour ago (1 children)

How can the fediverse police itself here?

Let us know what you propose.

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

The easiest is to keep everything public. Just don't allow 1-to-1 communication at all.

That would be enough to scare off most predators.

[–] Skavau@piefed.social 1 points 1 hour ago (1 children)

The easiest is to keep everything public. Just don’t allow 1-to-1 communication at all.

I think removing private messaging would be very unpopular on here. So that's not going to happen.

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

Probably. So the community needs to figure out how to offer the services they want to have while also protecting children.

[–] Skavau@piefed.social 1 points 1 hour ago (1 children)

So what do you propose then exactly? Private messaging isn't going anywhere.

[–] 1dalm@lemmings.world 1 points 1 hour ago (1 children)

Well, first I would recommend that server hosts who "can't afford to protect children" be much more careful about who they let onto their personal network.

Second, I would recommend that the developer community start treating this problem seriously and use the power of the open source development process (which is really good at finding creative solutions to problems) to make this a development priority.

[–] Skavau@piefed.social 1 points 1 hour ago* (last edited 1 hour ago) (1 children)

I don't think the powers-that-be care about the Fediverse at all bro. Legitimately.

It's far too small. Moreover, it is actually, in the terms you refer to, fairly well moderated.

[–] 1dalm@lemmings.world 1 points 55 minutes ago (1 children)

I'm only going to say this one more time.

They 100% will care.

[–] Skavau@piefed.social 1 points 54 minutes ago

Only in some unique situation where child porn is shared and the instance it's on does nothing about it.

But they don't do that.