I agree dd isn't useful for individual files. I contend that if I have an SSD of size X, and I write X amount of random bytes to it, there's nothing magic about the SSD construction that will preserve any previous information on the drive. Wear leveling cannot magically make the drive store more data than it can hold.
Well, in fact it can. That's "overprovisioning". The SSD has some amount of reserved space as replacement for bad cells, and maybe to speed things up. So if you overwrite 100% of what you have access to on the SSD, there would still be some amount of data you didn't catch in that reserved area. But loosely speaking you're right. If you overwrite the entire SSD, and not just files or one partition or something like that, you force it to replace most of the content.
I wouldn't recommend it, though. There are ATA secure erase, blkdiscard and the nvme format commands, which do it the right way. And 'dd' is just a method that gets it about right (though not 100%) in one specific case.
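To make that concrete, here's roughly what two of those look like. Treat it as a sketch: the device names are placeholders and the available options depend on what the drive supports.

```
# NVMe drives: format with "user data erase" (--ses=1),
# or "cryptographic erase" (--ses=2) if the drive supports it
sudo nvme format /dev/nvme0n1 --ses=1

# dd, the blunt instrument: overwrite everything the host can address
sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress conv=fsync
```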
Hum. I read that `blkdiscard` only marks the blocks (cells?) as empty, and doesn't change the contents; and that a sophisticated enough lab can still read the bits. In particular, the disk has to claim to support "Deterministic read ZEROs after TRIM"; if it doesn't, you have no guarantee of erasure. Without knowing anything about the make and model, `blkdiscard` would be categorically less secure. Right?
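If you want to check what a given drive actually claims, something like this should show it (the exact wording in the `hdparm` output varies between drives, so take the grep as an approximation):

```
# What the kernel knows about discard support
lsblk -D /dev/sdX

# Ask the drive directly; SATA SSDs that guarantee zeroes after TRIM
# list "Deterministic read ZEROs after TRIM" among their features
sudo hdparm -I /dev/sdX | grep -i trim
```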
Yes, thanks. Just invalidating or trimming the memory doesn't cut it. OP wants it erased, so it needs to be one of the proper erase commands. I think blkdiscard also has flags for that (zero, secure), so I believe you could do it with that command as well, if it's supported by the device and you append the correct options. I think other commands are easier to use (if supported).
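For reference, I believe those flags are `--secure` and `--zeroout`; the secure variant only does anything if the device advertises support for it:

```
# Ask the device to do a secure discard (only if it supports it)
sudo blkdiscard -s /dev/sdX

# Or explicitly zero-fill the whole device instead of just trimming it
sudo blkdiscard -z /dev/sdX
```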
I did read (on the Arch wiki) that `blkdiscard -z` is identical to `dd if=/dev/zero`, so that tracks. It (`blkdiscard`) is easier to use. However, given my memory and how infrequently I'll ever use it, I'll have forgotten the name of the command by next week. I'll never forget `dd`, though, mainly because it's more general purpose and I use it occasionally. OP probably wants `blkdiscard -z`, though.

I'm not sure about that. I think OP wants something like ATA secure erase. That would be `hdparm` and a bunch of options, and not blkdiscard. Unless they specifically know what they're doing, what options to pick, and what the controller will do in return.
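A rough sketch of what that `hdparm` dance typically looks like (assuming a SATA drive whose security feature set isn't frozen; the password is a throwaway and the device name is a placeholder):

```
# Check the security section: the drive must support the feature set
# and must not report "frozen"
sudo hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary user password (required before erasing)
sudo hdparm --user-master u --security-set-pass Eins /dev/sdX

# Issue the erase (use --security-erase-enhanced if the drive supports it)
sudo hdparm --user-master u --security-erase Eins /dev/sdX
```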
But it can store more data than it tells you it can. All drives are actually lying about their capacity; they all have extra sectors to replace bad ones.
Not that much extra.
Enough to not consider it securely erased.