[-] eth0p@iusearchlinux.fyi 4 points 4 months ago

Empty Walls by Serj Tankian?

[-] eth0p@iusearchlinux.fyi 4 points 1 year ago* (last edited 1 year ago)

Unless something has changed in the specification since I last read it, the attested environment payload only contains minimal information. The only information the browser is required to send about the environment is that this browser is {{the browser ID}}, and that it is not being used by a bot (e.g. headless Chrome) or automated process.
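To make that concrete, here's a minimal Python sketch of what such a payload boils down to. The field names are invented for illustration and are not from the actual specification; the point is how little a relying site actually learns:

```python
# Illustrative only: "browser_id" and "is_automated" are hypothetical
# field names, not from the WEI draft. The attested payload needs to
# convey little more than this.
attested_payload = {
    "browser_id": "example-browser/115.0",  # which browser produced this
    "is_automated": False,                  # bot / headless detection verdict
}

def site_accepts(payload: dict) -> bool:
    """A relying site only learns the browser ID and the bot verdict."""
    return not payload["is_automated"]

print(site_accepts(attested_payload))  # True for a normal, attested browser
```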

Depending on how pedantic people want to be about the definition of DRM, that makes it both DRM and not DRM. It's DRM in the sense that it's "technology to control access to copyrighted material" by blocking bots. But, it's not DRM in the sense that it "enables copyright holders and content creators to manage what users can do with their content."

It's the latter definition that people colloquially associate with DRM. When they think about DRM and its user-hostility, they're thinking about things like Denuvo, HDCP, always-online requirements, and so forth: technologies that restrict how a user interacts with content after they download or buy it.

Calling web environment integrity "DRM" is at best pedantry about a definition the average person doesn't use, and at worst an attempt to alarm or anger readers with an emotionally-charged term. As it stands right now, once someone is granted access to content gated behind web environment integrity, they're free to use it however they want. I can load a website that enforces WEI and run an adblocker to my heart's content, and it can't do anything to stop that once it serves me the page. It can't tell the browser to disable extensions, and it can't enforce the integrity of the DOM.

That's not to say it's harmless or that it can't be turned into user-hostile DRM later, though. There are a number of privacy, usability, ethical, and walled-garden-ecosystem concerns with it right now. If it ever reaches widespread implementation and use, the specification could later be amended to require sending an extra field that says "user has an adblocker installed". With that knowledge, a website could refuse to serve me the page, and that would be restricting how I use the content: my options would become their way (with disabled extensions and/or an unmodified DOM) or the highway.

The whole concept of web environment integrity is still dubious and reeks of ulterior motives, but my belief is that calling it "DRM" undermines efforts to push back against it. If the whole point of its creation is to pave the way for future DRM efforts (in the latter sense), a crowd of people raising pitchforks over something they incorrectly claim it does just gives proponents of WEI an excuse to say "the users don't know what they're talking about" and dismiss our feedback as mob mentality. Feedback that points out current problems and properly articulates future concerns is a lot harder to sweep under the rug.

[-] eth0p@iusearchlinux.fyi 3 points 1 year ago

For spoofing the user agent, I still think that some level of obscurity could help. The IP address is the most important part, but when sharing an internet connection with multiple people, knowing which type/version of device would help disambiguate between people with that IP (for example, a house with an Android user and an iPhone user). I wouldn't say not having the feature is a deal breaker, but I feel like any step towards making it harder to serve targeted ads is a good step.

Fair point on just using a regular VPN, but I'm hoping for something a bit more granular. It's not that all traffic would need to be proxied, though. If I use some specific Lemmy instance or click on an image/link, that was my choice to trust those websites. The concern here is that simply scrolling past an embedded image will make a request to some third-party website that I don't trust.

[-] eth0p@iusearchlinux.fyi 3 points 1 year ago

Can't have a runtime error if you don't have a compiled binary *taps forehead*

(For the record, I say this as someone who enjoys Rust)

[-] eth0p@iusearchlinux.fyi 3 points 1 year ago* (last edited 1 year ago)

I'm not a lawyer, nor do I have the full context of the legislation you're quoting, but my interpretation of that paragraph is that it only applies to aircraft that are carrying passengers.

. . . in the air space in possession of another, by a person who is traveling in an aircraft, is privileged . . .

You're the one who does this for a hobby, though. I'm sure that you know the laws more than I do :)

[-] eth0p@iusearchlinux.fyi 4 points 1 year ago* (last edited 1 year ago)

This post title is misleading.

They aren't proposing a way for browsers to DRM page contents and prevent modifications from extensions. This proposal is for an API that allows for details of the browser environment to be shared and cryptographically verified. Think of it like how Android apps have a framework to check that a device is not rooted, except it will also tell you more details like what flavor of OS is being used.
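As a rough sketch of the verification idea (not the real protocol: the actual proposal relies on third-party attesters and public-key signatures, while the shared-secret HMAC below is just a stand-in to keep the example self-contained), the key point is that the server checks a signature over the environment details instead of trusting whatever the client claims:

```python
import hashlib
import hmac
import json

# Hypothetical attester key. In the real design the attester would hold
# a private key and publish the public half; HMAC is only a stand-in.
ATTESTER_KEY = b"demo-secret"

def attest(environment: dict) -> tuple[bytes, str]:
    """The 'attester' vouches for the environment details by signing them."""
    blob = json.dumps(environment, sort_keys=True).encode()
    tag = hmac.new(ATTESTER_KEY, blob, hashlib.sha256).hexdigest()
    return blob, tag

def server_verifies(blob: bytes, tag: str) -> bool:
    """The website checks the signature rather than trusting the client."""
    expected = hmac.new(ATTESTER_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

blob, tag = attest({"os": "Android", "rooted": False})
print(server_verifies(blob, tag))         # untampered: True
print(server_verifies(blob + b"x", tag))  # tampered payload: False
```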

Is it a pointless proposal that will hurt the open web more than it will help? Yes.

Could it be used to enforce DRM? Also, yes. A server could refuse to provide protected content to unverified browsers or browsers running under an environment they don't trust (e.g. Linux).

Does it aim to destroy extensions and adblockers? No.
Straight from the page itself:

Non-goals:

...

  • Enforce or interfere with browser functionality, including plugins and extensions.

Edit: To elaborate on the consequences of the proposal...

Could it be used to prevent ad blocking? Yes. There are two hypothetical ways this could hurt adblock extensions:

  1. As part of the browser "environment" data, the browser could opt to send details about whether built-in ad blocking is enabled, whether any ad-block extensions are enabled, or even whether any extensions are installed at all.

Knowing this data and trusting that it isn't fake, a website could choose to refuse to serve content to browsers that have extensions or ad-blocking software.

  2. This could lead to a walled-garden web. Browsers that don't support the standard, or browsers with only minority usage, could be prevented from accessing content.

Websites could then require that users visit from a browser that doesn't support adblock extensions.
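A hypothetical worst-case sketch of how a server could act on that data, assuming (contrary to the current proposal) that the attested environment included extension details. Every field name here is invented:

```python
# Worst-case hypothetical: the current proposal does NOT expose
# extension data. "attested" and "has_adblock_extension" are invented
# field names used only to illustrate the two concerns above.
def serve(environment: dict) -> tuple[int, str]:
    if not environment.get("attested"):
        return 403, "Unverified browser"       # walled-garden effect
    if environment.get("has_adblock_extension"):
        return 403, "Disable your ad blocker"  # anti-adblock effect
    return 200, "<html>...</html>"

print(serve({"attested": True, "has_adblock_extension": False}))  # served
print(serve({"attested": False}))                                 # refused
```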

I'm not saying the proposal is harmless and should be implemented. It has consequences that will hurt both users and adblockers, but it shouldn't be sensationalized to "Google wants to add DRM to web pages".

Edit 2: Most of the recent activity on the GitHub issues seems to be lacking in feedback on the proposal itself, but here are some good ones that raise excellent concerns:

[-] eth0p@iusearchlinux.fyi 5 points 1 year ago

From what I can tell, that's basically what this is trying to do. Some company can sign a source image, then other companies can sign the changes made to the image. You can see that the image was created by so-and-so and then manipulated by so-and-other-so, and if you trust them both, you can trust the authenticity of the image.

It's basically git commit signing for images, but with the exclusionary characteristics of certificate signing (for their proposed trust model, at least. It could be used more like PGP, too).
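Here's a toy Python sketch of that chaining idea using bare hashes. The real scheme would use cryptographic signatures from each party's key rather than plain hashes, so the names and structure below are purely illustrative:

```python
import hashlib

# Toy provenance chain: each edit links back to the previous digest.
# A real scheme would sign each link with the editor's private key;
# a bare hash chain only shows the "history of changes" structure.
def link(prev_digest: str, actor: str, data: bytes) -> str:
    return hashlib.sha256(
        prev_digest.encode() + actor.encode() + data
    ).hexdigest()

d0 = link("", "camera-vendor", b"raw image bytes")         # original capture
d1 = link(d0, "news-agency", b"cropped image bytes")       # a recorded edit

# Re-deriving the chain from the same inputs yields the same digest, so
# any change to the claimed history produces a mismatch.
assert d1 == link(d0, "news-agency", b"cropped image bytes")
assert d1 != link(d0, "someone-else", b"cropped image bytes")
```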

[-] eth0p@iusearchlinux.fyi 5 points 1 year ago

Did the formal education before the job ruin it for you, or did the job itself ruin it?

[-] eth0p@iusearchlinux.fyi 5 points 1 year ago

Oh cool, there's a 200mp camera. Something that only pro photographers care about lol.

Oh this is a fun one! Trained, professional photographers generally don't care either, since more megapixels aren't guaranteed to make better photos.

Consider two sensors that take up the same physical space and capture light with the same efficiency, but are 10 vs. 40 megapixels. (Note: realistically, a higher pixel density would mean design trade-offs and tighter manufacturing tolerances.)

From a physics perspective, the higher-megapixel sensor will collect the same total amount of light, spread across a denser grid of pixels. This means the captured light is resolved in finer detail, but each individual pixel receives less light overall.

So imagine we have 40 photons of light:

More Pixels    Fewer Pixels
-----------    ------------
1 2 1 5         
2 6 2 3         11  11
1 9 0 1         15  3
4 1 1 1         

When you zoom in to the individual pixels, the higher-resolution sensor will appear more noisy. This can be mitigated by pixel binning, which groups (or "bins") those physical pixels into larger, virtual ones—essentially mimicking the lower-resolution sensor. Software can get crafty and try to use some more tricks to de-noise it without ruining the sharpness, though. Or if you could sit completely still for a few seconds, you could significantly lower the ISO and get a better average for each pixel.
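For illustration, binning the 4x4 "More Pixels" grid above into 2x2 blocks reproduces the lower-resolution grid exactly:

```python
# The 4x4 high-resolution grid from the example above (40 photons total).
grid = [
    [1, 2, 1, 5],
    [2, 6, 2, 3],
    [1, 9, 0, 1],
    [4, 1, 1, 1],
]

def bin2x2(g):
    """Sum each 2x2 block of physical pixels into one virtual pixel."""
    return [
        [g[r][c] + g[r][c + 1] + g[r + 1][c] + g[r + 1][c + 1]
         for c in range(0, len(g[0]), 2)]
        for r in range(0, len(g), 2)
    ]

print(bin2x2(grid))  # [[11, 11], [15, 3]] -- the lower-resolution grid
```

The total light is unchanged (still 40 photons); each virtual pixel just averages out more of the per-pixel noise.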

Strictly from a physics perspective (and assuming the sensors are the same overall quality), higher megapixel sensors are better simply because you can capture more detail and end up with similar quality when you scale the picture down to whatever you're comparing it against. More detail never hurts.

... Except when it does. Unless you save your photos as RAW (which takes a massive amount of space), they're going to be compressed into a lossy image format like JPEG. And the lovely thing about JPEG is that it takes advantage of human vision to strip away visual information that we generally wouldn't perceive, like slight color changes and high-frequency details (like noise!)

And you can probably see where this is going: the way that the photo is encoded and stored destroys data that would have otherwise ensured you could eventually create a comparable (or better) photo. Luckily, though, the image is pre-processed by the camera software before encoding it as a JPEG, applying some of those quality-improving tricks before the data is lost. That leaves you at the mercy of the manufacturer's software, however.

In summary: more megapixels are better in theory. In practice, bad software and image compression negate the advantages that a higher resolution provides, and higher-density sensors likely mean lower-quality data per pixel. Also, don't expect more megapixels to mean better zoom; you would need an actual lens for that.

[-] eth0p@iusearchlinux.fyi 3 points 1 year ago

It's a "feature," in fact...

Under What to expect on this support page, it says:

  • The phone branding, network configuration, carrier features, and system apps will be different based on the SIM card you insert or the carrier linked to the eSIM.

  • The new carrier's settings menus will be applied.

  • The previous carrier's apps will be disabled.

The correct approach from a UX perspective would have been to display an out-of-box experience wizard that gives the user an option to either use the recommended defaults, or customize what gets installed.

Unfortunately, many manufacturers don't do that, and instead install the apps unconditionally and with system-level permissions. And even if they did, it's likely that many of the carrier apps would either have a manifest value that requires them to be installed, be unlabeled (e.g. com.example.carrier.msm.mdm.MDM), or be misleadingly named to appear essential (e.g. "Mobile Services Manager").

[-] eth0p@iusearchlinux.fyi 3 points 1 year ago* (last edited 1 year ago)

I bought an unlocked phone directly from the manufacturer and still didn't get the choice.

Inserting a SIM card wiped the phone and provisioned it, installing all sorts of carrier-provided apps with system-level permissions.

As far as I've found, there's a few possible solutions:

  • Unlock the bootloader and install a custom ROM that doesn't automatically install carrier-provided apps. (Warning: this will blow an eFuse on Samsung devices, permanently disabling biometrics and other features provided by their proprietary HSM.)

  • Manually disable the apps after they're forcibly installed for you. Install adb on a computer and run adb shell pm disable-user --user 0 the.app.package for every app you don't want. If your OEM ROM is particularly scummy, it might go out of its way to periodically re-enable some of them, though.

  • Find a SIM card for a carrier that doesn't install any apps, then insert that into a fresh phone and hope that the phone doesn't adopt the new carrier's apps (or wipe the phone) when you insert your actual SIM.

[-] eth0p@iusearchlinux.fyi 3 points 1 year ago

Would absolutely love for Serif Labs to create a port for Affinity Photo and Designer. Of the programs I've tried, those two have the closest UX to Photoshop and Illustrator without the software-as-a-service model.

Hell, I'd even take it if all they did was support it running under WINE. While I would prefer a seamless UI that fits in with both GTK and Qt, it's understandable that they might not consider that worth the effort.

