this post was submitted on 31 Mar 2026

Change My View


With California's AB1043, this was on my mind, though my view isn't specifically about that law. Generally, giving users more control is a good thing, especially when it means excluding potentially distressing or harmful content. Filtering settings like this give users a way to pick and choose what they want to see. While I don't think an age value is the best way of implementing it, I do think it is likely to be better than having nothing at all.

So long as it's only a local value, the only significant downside I see is its use for fingerprinting and tracking. That is an issue, but being only one number, it's relatively nonspecific and unreliable. User agent strings provide far more data, and are far harder to manipulate meaningfully, for example. Furthermore, so long as it's all managed locally, privacy-focused software would also have the ability to either not provide the value, to use brackets in the UI rather than asking for a specific number, or to just use a default value, like 99. Given that, an age flag would be just another drop in a sea of fingerprinting methods, while the convenience and utility provided could be significant.
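To sketch what that local-only handling could look like (all names, bracket boundaries, and the default of 99 here are illustrative assumptions, not from AB1043 or any real implementation):

```python
# Hypothetical sketch: a privacy-focused client exposing a local "age" flag
# without ever revealing an exact value. Sites only see a coarse bracket.

DEFAULT_AGE = 99  # opt-out default: report "adult" for everyone


def bracketed_age(exact_age=None):
    """Map an optional locally-stored age to one of a handful of values,
    so the exact number never leaves the device."""
    if exact_age is None:
        return DEFAULT_AGE  # user declined to provide a value
    if exact_age < 13:
        return 12           # "child" bracket
    if exact_age < 18:
        return 17           # "teen" bracket
    return DEFAULT_AGE      # "adult" - exact age is never exposed


print(bracketed_age(9))   # 12
print(bracketed_age(15))  # 17
print(bracketed_age())    # 99
```

The point of the sketch is that the fingerprinting surface collapses to three or four possible values, which is far less identifying than a raw age.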

Ultimately, I feel like a series of boolean flags for different subject matters to filter would be better, but because an age value seems closer to being implemented, that's my focus.

So, having a local, "age" flag used for filtering content isn't a bad idea.

Change my view.

[–] halfwaythere@lemmy.world 3 points 10 hours ago (2 children)

There are plenty of ways to perform "filtering" available that don't rely on an age definition.

Parental control software is abundant: some routers, DNS filters, PiHole, site blocking, etc. I feel that the responsibility is being misplaced on the OS rather than on the content providers. For example, if software or games are targeting the younger crowd, or adults only, then they should be policing their own communities.

As you said, using age isn't the best route, due to fingerprinting and the potential for that information to be used for even worse targeting (predators). The benefits don't outweigh the negatives.

[–] midribbon_action@lemmy.blahaj.zone 4 points 9 hours ago (1 children)

All of those methods require access to a router and technical knowledge. Actually, they require access to every router your child will ever use, in case they ever switch networks or use cell data. There are ways to install spyware on phones and computers that will limit content everywhere, but again, that requires technical skill, plus investigating whether the spyware is privacy-protecting, which is far from always true. That kind of approach is going to lock out a lot of non-technical people and actively harm a lot of others. A question at account creation time about whether the user should be considered a child would make it difficult for that user to circumvent the label without admin access, and requires no spyware. I think it's a good idea. I don't necessarily know how I feel about putting it into laws and making it mandatory, though.

[–] OwOarchist@pawb.social 2 points 9 hours ago (2 children)

A question at account creation time about whether the user should be considered a child

A simple on/off toggle for 'child protection mode' should suffice.

The device does not need to know any user's exact age, at any time. That's just more personally identifiable information that can be used to identify and track you.

[–] midribbon_action@lemmy.blahaj.zone 3 points 9 hours ago* (last edited 9 hours ago)

Yeah, exactly. A lot of these laws require an age input but only expose the age 'bracket' a user is in (usually 3 or 4 brackets, with one being 'adult'), which I think is also OK, because a very young child should be treated differently from a teenager. But yeah, I would prefer that you, as the parent, simply choose the bracket rather than have to input the exact age of your child.

Edit: to be honest, though, the username field is probably just as valuable as the age, from a privacy perspective. Some people even enter their full names when creating a user account. And of course you can just lie and put in the wrong age. So the age thing is really splitting hairs in the privacy debate, when we literally have companies video-chatting with people to verify their government IDs before they can use some social media, which is fucking insane.

[–] PlzGivHugs@sh.itjust.works 2 points 8 hours ago

A simple on/off toggle for 'child protection mode' should suffice.

There are different types and severities of content. For example, a video of someone being beaten might be something you want to block only children from viewing, whereas a gore video might be blocked for children and teens alike. As I originally said, I don't think an age flag is the best metric, but I think it's better than nothing in this area.
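That severity point is the core difference from a single toggle. A rough sketch (the categories and minimum ages here are made up for illustration):

```python
# Illustrative sketch: an age threshold per content category is strictly
# more expressive than one on/off "child protection" toggle.

MIN_AGE = {
    "violence": 13,  # e.g. a video of a beating: hidden from young children
    "gore": 18,      # hidden from children and teens alike
}


def should_filter(category, user_age):
    """Hide content whose category's minimum age exceeds the local age flag."""
    return user_age < MIN_AGE.get(category, 0)


print(should_filter("violence", 15))  # False - a teen can see it
print(should_filter("gore", 15))      # True - still filtered for teens
```

With a single boolean, both categories would have to be lumped together; an age value lets the same flag drive different cutoffs per category.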

The device does not need to know any user's exact age, at any time. That's just more personally identifiable information that can be used to identify and track you.

A user-chosen age metric, especially when tied to content filtering, is such an unhelpful metric compared to everything else trackers already have, though. They have things like user agents and screen resolution, to say nothing of people using the same username on every site or providing their names outright.

[–] OwOarchist@pawb.social 3 points 9 hours ago

I feel that the responsibility is being misplaced on the OS rather than on the content providers.

And, tellingly, a lot of these laws are being pushed by content providers, namely Meta (Facebook). When you follow the money behind these law proposals to its source, Meta's fingerprints are all over it.

To put it charitably, they're trying to push the (potentially expensive) task of preventing minors from seeing objectionable content away from themselves and onto others where they don't need to pay for it. To take a more cynical view, they're pushing it in order to get yet another way to track (and then profit off of) users.