this post was submitted on 31 Mar 2026

Change My View


A place to learn something new, or strengthen your own position. Progress is impossible without a willingness to change.

# Rules

  1. Remain civil and friendly. Personal attacks, excessive snark, or similar will not be tolerated. Downvoting based on disagreement (rather than quality of discourse) may also be bannable.

  2. All posts should contain a view as the title, and should have an explanation of the reasoning in the body.

  3. All top level comments should address the original viewpoint, either challenging it, or seeking clarification.


With California's AB1043 in the news, this has been on my mind, though my view isn't specifically about that law. Generally, giving users more control is a good thing, especially when it means excluding potentially distressing or harmful content. Filtering settings like this give users a way to pick and choose what they want to see. While I don't think an age value is the best way of implementing it, I do think it is likely to be better than having nothing at all.

So long as it's only a local value, the only significant downside I see is its use for fingerprinting and tracking. That is an issue, but a single number is a relatively unspecific and unreliable signal. User agent strings provide far more data, and are far harder to manipulate meaningfully, for example. Furthermore, so long as it's all managed locally, privacy-focused software could decline to provide the value, present brackets in the UI rather than asking for a specific number, or fall back to a default value like 99. Given that, an age flag would be just another drop in a sea of fingerprinting methods, while the convenience and utility it provides could be significant.

Ultimately, I feel like a series of boolean flags for different subject matters would be a better filter, but since an age value seems closer to being implemented, that's my focus.

So: having a local "age" flag used for filtering content isn't a bad idea.

Change my view.

top 13 comments
[–] Eyron@lemmy.world 2 points 4 hours ago* (last edited 4 hours ago)

Your idea is a good one. But the main way they want to use that "local" age flag still sends a signal about the user to the server. Even if it's coarse, that's a tracking and privacy concern.

I believe repeating what's already worked might be the better option. Other media solved this with standardized rating systems, and the internet could adopt a similar content signal. A content-rating header (G, PG, PG-13, R, M) could achieve many of the same goals as an age flag or boolean filters.

The ratings describe the content, not the user. Filtering can stay local, without disclosing anything about the user. Current parental controls rely on blocklists or detection, both of which are unreliable. A standard rating signal would let them be simpler and more consistent. Operating systems and/or browsers would still have to offer controls, but those could look more like locally selecting ratings than strict age checks.
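A rough sketch of how that local filtering could work, assuming a hypothetical `Content-Rating` header (the header name and rating ladder below are illustrative assumptions, mirroring film ratings as described above):

```python
# Illustrative sketch only: "Content-Rating" is a hypothetical header,
# and the ordering below is an assumption modeled on film ratings.
# All filtering happens locally; nothing about the user is sent anywhere.

RATING_ORDER = ["G", "PG", "PG-13", "R", "M"]

def allowed(content_rating: str, local_max: str) -> bool:
    """Return True if content at `content_rating` passes the local limit."""
    try:
        return RATING_ORDER.index(content_rating) <= RATING_ORDER.index(local_max)
    except ValueError:
        return False  # unknown or missing rating: fail closed
```

A client with a locally selected limit of "PG-13" would show "PG" content and hide "R" content, without the server ever learning which limit was chosen.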

I think more critics need to recognize that without workable alternatives, others will push for these bad solutions. The good news is this isn’t new, and other industries seem to have mostly avoided making these systems legal requirements.

[–] stoy@lemmy.zip 6 points 8 hours ago (2 children)

As you present the argument, with it being only a local variable on your own computer, I agree with you.

But, the way I see it, the issue is that it sets up the infrastructure for further control.

Once it's established as a normal feature, there will be a (real or manufactured) incident where kids gain access to porn or other mature content. The media will go into overdrive, this time pushing for government-controlled age checks instead of the local checks of the past.

The infrastructure is there, and you have nothing to hide, so don't worry!

[–] halfwaythere@lemmy.world 2 points 6 hours ago

Exactly. This is a slippery-slope scenario. Next they'll want the camera to take snapshots of the user, to verify at all times that the logged-in user is actually who they say they are.

Some will call this comment alarmist, but a little critical thinking shows this is the direction the government and third parties are eyeing, if not already planning for. Just look at Microsoft trying to take snapshots of what you're working on throughout your session.

[–] PlzGivHugs@sh.itjust.works 1 points 7 hours ago (1 children)

But like, the government can do that (and is already doing so) just as easily without an OS flag. Adding any form of verification to an OS flag is far harder to implement, since an OS has to function in a variety of situations (such as without internet), and unlike with a service provider, if it's on a local machine, the user can just change it (or install a different OS).

[–] halfwaythere@lemmy.world 1 points 6 hours ago (1 children)

Just because the three-letter agencies are already doing this doesn't mean they should be. It also doesn't mean we should just accept it.

> if it's on a local machine, the user can just change it (or install a different OS).

This also is a gaping hole in their plan. And by California's definition, all OSes are to have this check... So if they all conform and give in to this demand, then what OS will they change to?

[–] PlzGivHugs@sh.itjust.works 1 points 5 hours ago* (last edited 5 hours ago) (1 children)

> This also is a gaping hole in their plan. And by California's definition, all OSes are to have this check... So if they all conform and give in to this demand, then what OS will they change to?

But like, that's my point. It's effectively impossible to regulate an idea on the internet or on individuals' devices. There will always be other options for OSes, and anyone can always make a copy, or make their own. That's why locally stored age values are useless for age verification: there is no realistic way to legally enforce or regulate their use.

All that's left is a user-controlled content filter.

[–] moopet@sh.itjust.works 2 points 5 hours ago

Why would there always be an option for another OS? I think it's perfectly plausible that governments would require hardware manufacturers to limit OS installation to "approved" software with revocable keys.

[–] halfwaythere@lemmy.world 3 points 8 hours ago (2 children)

There are plenty of ways to perform "filtering" that don't rely on an age definition.

Parental control software is abundant: some routers, DNS filters, PiHole, site blocking, etc. I feel the responsibility is being misplaced on the OS rather than on the content providers. For example, if software or games are targeting the younger crowd, or adults only, then they should be policing their communities.

As you said, using age isn't the best route, due to fingerprinting and the potential for this information to be used for even worse targeting (predators). The benefits don't outweigh the negatives.

[–] midribbon_action@lemmy.blahaj.zone 4 points 7 hours ago (1 children)

All of those methods require access to a router and technical knowledge. Actually, they require access to every router your child will ever use, in case they switch networks or use cell data. There are ways to install spyware on phones and computers that will limit content everywhere, but again that requires technical skill, plus investigating whether the spyware is privacy-protecting, which is far from always true. A software-level approach is going to lock out a lot of non-technical people and actively harm a lot of others. A question at account-creation time about whether the user should be considered a child would make it difficult for that user to circumvent the label without admin access, and requires no spyware. I think it's a good idea. I don't necessarily know how I feel about putting it into law and making it mandatory, though.

[–] OwOarchist@pawb.social 2 points 7 hours ago (2 children)

> A question at account creation time about whether the user should be considered a child

A simple on/off toggle for 'child protection mode' should suffice.

The device does not need to know any user's exact age, at any time. That's just more personally identifiable information that can be used to identify and track you.

[–] midribbon_action@lemmy.blahaj.zone 3 points 7 hours ago* (last edited 7 hours ago)

Yeah, exactly. A lot of these laws require an age input but only expose the age bracket a user is in (usually 3 or 4 brackets, with one being "adult"), which I think is also okay, because a very young child should be treated differently from a teenager. But yeah, I would prefer that you, as the parent, simply choose the bracket rather than have to input the exact age of your child.

Edit: to be honest, though, the username field is probably just as valuable as the age from a privacy perspective. Some people even enter their full names when creating a user account. And of course you can just lie and put in the wrong age. So the age thing is really splitting hairs in the privacy debate, when we literally have companies video-chatting with people to verify their government IDs before they can use some social media, which is fucking insane.

[–] PlzGivHugs@sh.itjust.works 2 points 7 hours ago

> A simple on/off toggle for 'child protection mode' should suffice.

There are different types and severities of content. For example, a video of someone being beaten might be something you want to block children from viewing, whereas a gore video might be blocked for both children and teens. As I originally said, I don't think an age flag is the best metric, but I think it's better than nothing in this area.
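The distinction above (one severity blocked only for children, a harsher one blocked for teens too) can be sketched as per-category severity thresholds rather than a single toggle. The categories, severity labels, and thresholds below are made-up illustrations; a real filter would need an agreed taxonomy:

```python
# Hedged sketch of per-category severity thresholds, instead of a
# single on/off "child protection" toggle. All names here are
# illustrative assumptions, not any real standard.

# minimum bracket allowed to view each (category, severity) pair
THRESHOLDS = {
    ("violence", "mild"): "teen",      # e.g. footage of a beating: teens and up
    ("violence", "graphic"): "adult",  # e.g. gore: blocked for children and teens
}
BRACKET_RANK = {"child": 0, "teen": 1, "adult": 2}

def visible(category: str, severity: str, user_bracket: str) -> bool:
    """Return True if the user's bracket meets the threshold for this content."""
    required = THRESHOLDS.get((category, severity), "adult")  # unknown: fail closed
    return BRACKET_RANK[user_bracket] >= BRACKET_RANK[required]
```

This is still driven by an age-like bracket, but it shows why a graded value can express distinctions a single boolean cannot.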

> The device does not need to know any user's exact age, at any time. That's just more personally identifiable information that can be used to identify and track you.

A user-chosen age metric, especially when tied to content filtering, is such an unhelpful metric compared to everything else they already have, though. Those tracking user data have things like user agents and screen resolutions, not to mention people using the same username on every site, or providing their names outright.

[–] OwOarchist@pawb.social 3 points 7 hours ago

> I feel that the responsibility is being misplaced on the OS rather than on the content providers.

And, tellingly, a lot of these laws are being pushed by content providers, namely Meta (Facebook). When you follow the money behind these proposals to its source, Meta's fingerprints are all over it.

To put it charitably, they're trying to push the (potentially expensive) task of preventing minors from seeing objectionable content away from themselves and onto others where they don't need to pay for it. To take a more cynical view, they're pushing it in order to get yet another way to track (and then profit off of) users.