Are you using /dev/zero as the input for dd, something like dd if=/dev/zero of=/dev/<HD to erase>? That's how I used to do this, but I've recently switched to the little tool 'nwipe', which lets you wipe multiple drives in parallel and shows proper progress.
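A rough sketch of how I run it (flags from memory, so double-check against nwipe --help; /dev/sdX is a placeholder, confirm the device with lsblk first):

```
# Interactive ncurses mode: select drives and watch per-drive progress
sudo nwipe

# Non-interactive, single-pass zero wipe of one drive
sudo nwipe --nogui --autonuke --method=zero /dev/sdX
```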
I prefer shred for erasing magnetic drives. dd can work too, but its options are arcane enough that it's easy to make mistakes that lead to weird behavior.
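For example, a single random pass followed by a final zeroing pass, assuming the disk shows up as /dev/sdX (verify with lsblk before running anything):

```
# -v: show progress, -n 1: one random overwrite pass, -z: finish with zeros
sudo shred -v -n 1 -z /dev/sdX
```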
If that doesn't fix the unexpected size problem, I would suspect the USB bridge in your dock. Those things are notoriously buggy.
Connecting directly with SATA is a more reliable approach. It also lets you use hdparm to tell the drive to run a secure erase cycle on itself.
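The usual sequence looks roughly like this (/dev/sdX and the password are placeholders; the drive has to report "not frozen" in the -I output, and USB bridges often get in the way of that, which is another reason to go direct SATA):

```
# Check the drive supports the ATA security feature set and is not frozen
sudo hdparm -I /dev/sdX

# Set a temporary user password (required before an erase can be issued)
sudo hdparm --user-master u --security-set-pass p /dev/sdX

# Tell the drive to erase itself; this runs internally and can take hours
sudo hdparm --user-master u --security-erase p /dev/sdX
```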
Could be a bad dock or USB controller; try a different one. Otherwise just snap the SATA connector off, and most people won't bother trying to get anything off it.
By "stops", do you mean dd exits or hangs? Does dd throw an error or a nonzero exit code? Anything in dmesg?
There's no automatic mounting going on in the background, right?
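A quick way to check all three at once, with /dev/sdX standing in for your drive:

```
# Run the wipe and record dd's exit status
sudo dd if=/dev/zero of=/dev/sdX bs=16M status=progress
echo "dd exit code: $?"

# Check the kernel log for USB resets or I/O errors
sudo dmesg | tail -n 20

# If anything shows a mountpoint here, something auto-mounted the disk
lsblk -o NAME,MOUNTPOINT /dev/sdX
```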
It exits, usually with a disk-full error.
Try mounting them. What is the dd command you’re using?
Now, depending on your threat model, maybe a drill and a sledgehammer will be enough. If your threat model is on the high end and the data is really sensitive, well, you'll have to spend that money if you can't get zeros written.
I am a private person without any state or corporate secrets in hand. I do not do online banking.
My threat model, I believe, is limited to random drive-by actors.
I was hoping to be able to pass these drives on to others to use; the screwdriver and hammer would render them into e-waste. But on that issue: once the platter assembly is disassembled and the platters are separated and mixed up, the data on them is probably not recoverable, right? Given that each drive has multiple platters.
A few gigs of zeroes will prevent random drive-bys. At that point the partition table and the filesystem metadata of at least the first partition are overwritten; you "can" still recover files off it, but you'll be missing filenames, and at least half the files will be corrupt because fragmentation loses track of which pieces belong to which files.
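A minimal sketch of that partial wipe, with /dev/sdX as a placeholder for the actual drive:

```
# Zero only the first 4 GiB: enough to destroy the partition table and
# the start of the first filesystem, without waiting for a full pass
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=4096 status=progress
```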
I agree with Ono that shred is a good tool for this. If you don't want to use that, try increasing dd's block size to at least 1M, if not 16M, to reduce the overhead.
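Something along these lines; oflag=direct is optional, but it bypasses the page cache so the progress figures reflect what actually hit the disk (/dev/sdX is a placeholder):

```
# Larger blocks mean fewer read/write syscalls; direct I/O keeps the
# reported rate honest instead of measuring writes into RAM
sudo dd if=/dev/zero of=/dev/sdX bs=16M oflag=direct status=progress
```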