47
submitted 2 weeks ago by sysadmin@lemmy.world to c/pop_os@lemmy.world
57
submitted 3 weeks ago by sysadmin@lemmy.world to c/pop_os@lemmy.world

Privacy Front-end Nitter:

https://xcancel.com/carlrichell/status/1815498238285562127

https://nitter.privacydev.net/carlrichell/status/1815498238285562127

Extracted from Twitter:

The first alpha release of Pop!_OS 24.04 with COSMIC will be released August 8th.

@jeremy_soller, Maria, and I join the System76 Transmission Log podcast to chat about how COSMIC came to be and where it’s headed.

https://system76.transistor.fm/10

30
submitted 6 months ago by sysadmin@lemmy.world to c/framework@lemmy.ml
55
submitted 8 months ago by sysadmin@lemmy.world to c/framework@lemmy.ml

On their order page, it says "Ships within two weeks"

Cheers!

20
submitted 8 months ago by sysadmin@lemmy.world to c/framework@lemmy.ml
12
submitted 8 months ago by sysadmin@lemmy.world to c/framework@lemmy.ml
19
submitted 9 months ago by sysadmin@lemmy.world to c/framework@lemmy.ml
6
dbrand Framework 13 Skin (www.youtube.com)
submitted 9 months ago by sysadmin@lemmy.world to c/framework@lemmy.ml
5
submitted 9 months ago by sysadmin@lemmy.world to c/framework@lemmy.ml
170
submitted 1 year ago* (last edited 1 year ago) by sysadmin@lemmy.world to c/privacy@lemmy.ml

Ever thought, "Why should I care about online privacy? I have nothing to hide."? Read this: https://www.socialcooling.com/

credit: [deleted] user on Reddit.

original link: https://old.reddit.com/r/privacy/comments/savz9u/i_have_nothing_to_hide_why_should_i_care_about/

u/magicmulder

The main issue isn’t that someone would be interested in you personally, but that data mining may put you in categories you don’t want to be in. A 99.9% correlation of your "likes" and follows with those of terror suspects? Whoops, you’re a terror suspect yourself. You follow heavy metal bands and Harley-Davidson? Whoops, you have a 98% likelihood of drinking and smoking, and up goes your insurance rate. And so on.

u/Mayayana

Indeed. But most people here seem to have misunderstood your post. One of my favorite examples is from Eric Schmidt, chairman of Google, who said in an interview (on YouTube) that if you think you have something to hide, then maybe you shouldn't be doing what you're doing. (Like maybe the Jews on Kristallnacht shouldn't have been living in their houses?) Schmidt was later reported to have gotten an apartment in NYC without a doorman, to avoid gossip about his promiscuous lifestyle. :)

u/SandboxedCapybara

I've always thought the "no bathroom door," "no curtains," or "no free speech" arguments fell flat when talking about privacy. Sure, they make sense to people who already care about privacy, but to people who don't, they're just hollow arguments. I think a better argument is the real-life issues people actually face. The fact that things like their home address, social security number, face, email, phone number, passwords, and their emails and texts could soon be out there for anyone to see, or may already be, is almost always more concerning to people. People trust companies. People don't trust people.

u/Striking-Implement52

Another good read: https://thenewoil.org/why.html 'I've Got Nothing to Hide' and Other Misunderstandings of Privacy

etc

130
submitted 1 year ago by sysadmin@lemmy.world to c/firefox@lemmy.ml

Even after all these years, Firefox keeps using the hidden ~/.mozilla directory instead of the XDG base directories. How long will this continue?

Watch https://bugzilla.mozilla.org/show_bug.cgi?id=259356 for updates to this request.

~/.mozilla/firefox/ is a mish-mash of data, config, and cache. It's not simple to unravel that. Beyond that, it would be a breaking change, and that requires more caution.
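
For contrast, here's a minimal sketch (not from the bug thread; the app name is made up) of how an XDG-aware application would split its files across the base directories, instead of piling everything into ~/.mozilla/firefox/:

```python
import os
from pathlib import Path

def xdg_dir(env_var: str, default: Path) -> Path:
    """Use the XDG env var if set, otherwise the spec's default location."""
    value = os.environ.get(env_var)
    return Path(value) if value else default

home = Path.home()
app = "myapp"  # hypothetical application name

config_dir = xdg_dir("XDG_CONFIG_HOME", home / ".config") / app             # settings
data_dir = xdg_dir("XDG_DATA_HOME", home / ".local" / "share") / app        # profiles, bookmarks
cache_dir = xdg_dir("XDG_CACHE_HOME", home / ".cache") / app                # disposable cache

print(config_dir, data_dir, cache_dir, sep="\n")
# Firefox instead keeps all three kinds of data under ~/.mozilla/firefox/
```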

credit: u/yo_99 on Reddit.

original link: https://old.reddit.com/r/firefox/comments/vkgk78/why_does_firefox_keeps_using_mozilla_directory/

[-] sysadmin@lemmy.world 12 points 1 year ago

There are two main aspects to coreboot in my opinion that differentiate it from other firmware ecosystems:

The first is a strong push towards having a single code base for lots of boards (and, these days, architectures). Historically, most firmware is built in a model I like to call "copy&adapt": the producer of a device picks the closest reference code (probably a board support package), adapts it to work with their device, builds the binary, puts it on the device, then moves on to the next device.

Maintenance is hard in such a setup: If you find a bug in common code you'll have to backport the fix to all these copies of the source code, hope it doesn't break anything else, and build all these different trees. Building a 5 year old coreboot tree on a modern OS is quite the exercise, but many firmware projects are near impossible to build under such circumstances.

With coreboot, we encourage developers to push their changes to the common tree. We maintain it there, but we also expect the device owner (either the original developer or some interested user) to help with that, at least with testing but ideally with code contributions to keep it up to the current standards of the surrounding code. A somewhat maintained board can typically be brought up to the latest standards in less than a day when a new build is required, which means everybody has an easy time doing a new build when necessary.

The second aspect is our separation of responsibilities: where BIOS mandates the OS-facing APIs and not much else (with lots of deviation in how that standard is implemented), UEFI (and other projects like u-boot) tends to go to the other extreme: with UEFI you buy into everything from the build system to boot drivers, OS APIs, and the user interface. If you need something that provides only 10% of UEFI, you'll have a hard time.

With coreboot we split responsibilities between 2 parts: coreboot does the hardware initialization (and comes with its build system for the coreboot part, and drivers, but barely any OS APIs and no user interface). The payload is responsible for providing interfaces to the OS and user (and we can use Tianocore to provide a UEFI experience on top of coreboot's initialization, or seabios, grub2, u-boot, Linux, or any program you build for the purpose of running as payload).

The interface between coreboot and the payload is pretty minimal: the payload's entry point is well-defined, and there's a data table in memory that describes certain system properties. In particular the interface defines no code to call into (including: no drivers), which we found complicates things and paints the firmware architecture into a corner.

To help payload developers, coreboot also provides libpayload, a set of minimal libraries implementing libc, ncurses and various other things we found useful, plus standard drivers. It's up to each coreboot user/vendor if they want to use that or rather go for whatever else they want.

credit: [deleted] user on Reddit.

22
submitted 1 year ago* (last edited 1 year ago) by sysadmin@lemmy.world to c/pop_os@lemmy.world

Hi everyone, I have just recently found out there is a thing like coreboot/libreboot, and I like the concept of it: fast(er), secure, open source, with an easy-to-flash, non-brickable process.

I’ve been trying to understand the basics behind it and it’s too difficult for me. I have some basic understanding of what BIOS / EFI is. And as I understand it, the core/libreboot is an open-source replacement for it. Great!

But what I’m interested in understanding is: how does it manage to be better than the OEM’s BIOS? I understand that open source is by nature better than closed-source software, but what I don’t understand is how this project manages to be better for the end user.

As I get it, it’s similar to custom ROMs on Android. There is the OEM’s ROM (say, Samsung’s): it makes its version of Android, and it’s good (in terms of how it works with the hardware), but usually with tons of bloatware, and the OEM never updates the phone after a customer has bought it. Then we have custom ROMs, like CyanogenMod / LineageOS / Pixel Experience / etc. Those ROMs somehow manage to keep the software updated for a much longer time-frame, while having extra functionality and even working faster. (Frankly, I don’t understand how they manage to do that either, or why it’s so difficult for OEMs.)

Is this something similar? I can understand the (ineffective) processes of big corporations, but I cannot understand how the developers manage to keep these things better, lighter, etc. Are the OEMs’ firmwares somehow bloated? If so, why? And why wouldn’t a big company like Gigabyte, Asus, or Acer also use this product? Why do they write such closed-source BIOSes and EFIs if they could use something lighter, faster, and in so many ways better, as advertised on the coreboot website?

I’m not sure I’m keeping the question simple enough for others to follow, so let’s talk about real hardware. Say I have an Asus MAXIMUS IV GENE-Z motherboard. Can I install coreboot on it (it seems like yes, according to the website https://coreboot.org/status/board-status.html#asus/maximus_iv_gene-z), and if I can, will it lose some functionality compared to its original EFI? It’s not that I need it, but I’m interested in whether there’s something special in the original firmware or not. There are many things on the website, in the ‘ROG Exclusive Features’ and ‘Special Features’ sections, but I don’t know whether that’s something special or just marketing bullshit, whether it lives in the firmware, or whether they’re talking about something else entirely in those sections.

Please pardon me if the question is too newbish and was answered somewhere. I’ve tried to do my search and found no information on my question. I would appreciate any comment on this topic. Thanks!

edit: Found "Why use coreboot?" (reddit post). It’s an interesting read in itself, but it’s not the question I’m trying to find an answer to.

credit: u/walteweiss on Reddit.

original link: https://www.reddit.com/r/coreboot/comments/bgjzth/how_does_coreboot_manage_to_be_better_than/

[-] sysadmin@lemmy.world 1 points 1 year ago

As a side note about BIOS

Framework’s official stance on Coreboot:

“As this keeps popping up even after multiple responses, let this be the “official” response so we can put this to bed, at least for now.

It is not that Framework “does not care” about Coreboot, it is that we have a very long list of priorities for a very small team (we are less than 50 globally and have existed for less than 3 years) and while being able to support Coreboot would be fantastic, it is just not a priority for Framework right now given the sheer number of initiatives that we have to launch now and in the immediate future. We pivot from one NPI (New Product Introduction) to the next, back to back, and have since our first product launch. Our firmware/BIOS team is small and is supplemented by an outside 3rd Party partner. The consistent, “well, just hire more people then” is unfortunate as those in the know understand that’s not how it works, especially for a small, private company trying to exist in a very mature market segment. While tech in general is shrinking, layoffs are in the news constantly, and global economies are getting hit hard, we’re still here, releasing new products, and working hard to support everything we’ve already launched.

If and when we decide to add Coreboot to the docket of active projects, we’ll let the Community know, but if you want Framework to continue to exist, and you believe in our mission, we’ll have to continue to ask for your patience. If not having Coreboot is a blocker for you, personally, to join the Framework Family, we do hope that we can earn your business in the future.”

https://community.frame.work/t/responded-coreboot-on-the-framework-laptop/791/239

[-] sysadmin@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

The 7640u and 7840u are both rated for a default TDP of 28W, although it is configurable as low as 15W by the laptop manufacturer.

That reference seems to be using the default for the 7840u, whereas they're using the configurable minimum for the 7640u, which is misleading.

The 7840u and 7640u are actually the exact same chip, just the 7640u has 2 CPU cores and 4 GPU cores disabled.

Ryzen is pretty good at putting cores to sleep when they aren't needed, so when at idle or running a load that can't take advantage of those cores the 7840u should behave pretty much the same as a 7640u and have similar power consumption.

Then, under heavy loads, both CPUs will likely hit whatever maximum power the cooler can handle; however, more cores each running at lower power (e.g. the 7840u) generally perform better than fewer cores each running at higher power (e.g. the 7640u).

So under heavy loads the 7840u should actually have better performance with similar power consumption; the better performance also lets it complete the task quicker and get back to low-power idle sooner, improving battery life overall.

So theoretically the 7840u should overall have similar to slightly better battery life than the 7640u assuming all software is implemented properly (I was an early adopter of Ryzen 3000 desktop CPUs and it took several driver/BIOS updates before it would reliably put unneeded cores to sleep and significantly reduce idle/low load power consumption).

++

credit: u/RiftBladeMC on Reddit and @RiftBlade@lemmy.world on Lemmy.

original link: https://old.reddit.com/r/framework/comments/13dz5nb/comment/jjnv1nq/?utm_source=share&utm_medium=web2x&context=3

[-] sysadmin@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

With the workloads you listed, the only place you may notice a difference is in gaming. But if the games you play are not very intensive, then you will only see a negligible improvement.

For that use case, the Ryzen 5 seems perfectly suitable. It's what I pre-ordered myself, with a similar expected workload.

This is data on a previous generation Ryzen 5: https://pc-builds.com/fps-calculator/result/1fB1dg/4T/dragon-age-inquisition/ This might be helpful too: https://www.youtube.com/watch?v=ykRYYl6xSpo

++

credit: u/runed_golem on Reddit

original link: https://old.reddit.com/r/framework/comments/13dz5nb/comment/jjnow91/?utm_source=share&utm_medium=web2x&context=3

[-] sysadmin@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

thanks for the interest -- according to the instructions on Lemmy, "the person has to post a comment in the community, before there will be an option to appoint as mod..." please go ahead and post something on the community anytime and we can go from there :)

[-] sysadmin@lemmy.world 10 points 1 year ago* (last edited 1 year ago)

If you're already on a Linux-based operating system, and you gotta run a real instance of Windows for some reason, your safest bet from both a security and privacy standpoint is to run it in a virtual machine (I like VirtualBox, personally, but VMware or whatever else will do the job fine too) and firewall the hell out of it. In a virtual machine, you can totally lock it down as much or as little as you need for the task at hand, and there ain't a damned thing Windows itself can really do about it; as an added bonus, it saves you from the required reboots of dual-booting. It's confined to a "safe space" (until you start enabling network stuff and opening ports to it). You're in control.

edit: or QEMU/KVM (with virt-manager)

[-] sysadmin@lemmy.world 3 points 1 year ago

Really you'd have to fire up Wireshark and see what telemetry Windows was blabbing away behind your back. Analysing those logs can be a tedious business, especially as you'd need a large dataset.
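 
As a rough illustration of what that first pass can look like, here's a hedged sketch that counts DNS queries in a capture saved from Wireshark; it assumes scapy is installed, and the keyword list is purely illustrative:

```python
# Rough first-pass triage of a Wireshark capture: count DNS queries per domain
# so obvious telemetry endpoints stand out. Assumes the capture was saved as
# capture.pcap and that scapy is installed; the keyword list is illustrative.
from collections import Counter
from scapy.all import rdpcap, DNSQR

packets = rdpcap("capture.pcap")
queries = Counter()

for pkt in packets:
    if pkt.haslayer(DNSQR):
        name = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        queries[name] += 1

suspect_keywords = ("telemetry", "vortex", "watson", "data.microsoft")  # illustrative
for name, count in queries.most_common():
    flag = " <--" if any(k in name for k in suspect_keywords) else ""
    print(f"{count:6d}  {name}{flag}")
```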

The thing with just about any tech-related question is that some geek will likely have done the heavy lifting for you already. Here is a nice start:

https://www.zdnet.com/article/windows-10-and-telemetry-time-for-a-simple-network-analysis/

Here is another one:

https://www.comparitech.com/blog/information-security/windows-10-data/

That's the data required to be collected; it doesn't say whether or not it actually gets sent back to Microsoft. Best assume yes.

Course, all that proprietary software will have a voluminous licence agreement that nobody reads. They'll collect as much data as they can to "maximise user experience" or whatever rubbish.

[-] sysadmin@lemmy.world 12 points 1 year ago

Pro is a little bit better because of features like BitLocker. A lot better would be the Education/Enterprise variants. You'd need special licenses for running Enterprise, I think. There are also registry hacks that would give you some protection against telemetry (I personally haven't done this).
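 
For reference, the registry hack people usually mean is the documented AllowTelemetry policy value; here's a minimal sketch of setting it with Python's winreg. Note that the lowest setting is only honored on Enterprise/Education editions per Microsoft's documentation, and this is an illustration, not a recommendation:

```python
# Minimal sketch of the oft-cited telemetry registry tweak (run as Administrator).
# Writes the documented AllowTelemetry policy value; 0 ("Security") is only
# honored on Enterprise/Education editions, Pro bottoms out at 1 ("Basic").
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, 0)

print("AllowTelemetry policy value written; a reboot is the safest way to apply it.")
```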

Privacy-wise though, any "Windows" is going to fare worse than Linux, is what I'd say. Wait for others in the sub for more insights.

[-] sysadmin@lemmy.world 1 points 1 year ago

The "store now, decrypt later" issue is a problem for public-key cryptography, which protects most internet traffic. Symmetric encryption isn't really messed up by quantum computing even in theory: your 256-bit key might effectively become a 128-bit key, but that's still far too much to brute-force, so it's nothing to worry about (Grover's algorithm is the general-purpose quantum algorithm that roughly halves the effective key size).

What is likely threatened by quantum computing are public-key algorithms that rely on one direction being easy and the other being hard. Like factoring: multiplying huge numbers is fast, factoring them is not. Shor's algorithm is the famous one that can do this fast enough, given a good quantum computer. A lot of these allegedly one-way functions would be screwed up to varying degrees in the so-called 'post-quantum world'.
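 
A tiny, hedged illustration of that asymmetry (nowhere near real key sizes, just to show the shape of it):

```python
# Illustrative only: multiplying two primes is trivial, while recovering them
# from the product gets rapidly harder as the numbers grow. Real RSA moduli are
# 2048+ bits; sympy's factorint would effectively never finish on those.
from sympy import randprime, factorint
import time

p = randprime(2**40, 2**41)
q = randprime(2**40, 2**41)

start = time.perf_counter()
n = p * q                      # the "easy" direction
print(f"multiply: {time.perf_counter() - start:.6f}s")

start = time.perf_counter()
factors = factorint(n)         # the "hard" direction (still feasible at ~82 bits)
print(f"factor:   {time.perf_counter() - start:.6f}s -> {factors}")
```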

In a normal SSL/TLS connection, you use public-key cryptography to exchange a symmetric key, then you use that. So if you were to record an entire connection and later be given a big quantum computer, you could in theory work it all out: first by undoing the initial public-key piece, then by reading the symmetric key directly, at which point you could decrypt the rest normally.
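 
Here's a toy sketch of that hybrid pattern using Python's `cryptography` package. It's not TLS itself, just the same shape: a public key wraps a symmetric session key, and the bulk data rides on the symmetric key.

```python
# Toy sketch of the hybrid pattern: public-key crypto wraps a symmetric key,
# and the symmetric key encrypts the bulk data. Break the public-key step later
# (e.g. with Shor's algorithm) and the recorded symmetric ciphertext falls too.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# "Server" key pair -- the part a quantum attacker would target.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# "Client" picks a session key and wraps it with the server's public key.
session_key = AESGCM.generate_key(bit_length=256)
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Bulk traffic is protected only by the symmetric session key.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual payload", None)

# An attacker recording (wrapped_key, nonce, ciphertext) today and breaking the
# RSA step later can unwrap session_key and then decrypt ciphertext normally.
```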

From my understanding, Standard Notes wouldn't actually be subject to this, as it never transmits your actual key: you encrypt your data with your real key locally, and only then does it get sent over TLS. So while the TLS session's public and private keys could be recovered, the actual payload data would still be encrypted with a key derived from your password that is never transmitted.
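 
And a sketch of that general pattern (not Standard Notes' actual scheme): the key is derived locally from the password and never leaves the device, so only ciphertext travels inside TLS.

```python
# Not Standard Notes' actual scheme -- just the general pattern described above:
# derive the encryption key locally from the password, encrypt locally, and only
# the resulting ciphertext ever travels over TLS. Breaking TLS later exposes the
# ciphertext, not the password-derived key.
import os
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

password = b"correct horse battery staple"   # never sent to the server
salt = os.urandom(16)                        # stored alongside the ciphertext

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
key = kdf.derive(password)

nonce = os.urandom(12)
note_ciphertext = AESGCM(key).encrypt(nonce, b"my private note", None)

# Only salt, nonce, and note_ciphertext leave the device (inside the TLS tunnel).
```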

Now, if it does actually transmit that key at some point, then all bets are off. But it couldn't really be secure if it transmitted your key anyway, right? So it probably doesn't do that.

