this post was submitted on 15 Feb 2026
30 points (96.9% liked)

No Stupid Questions


update 2: The Linux community has suggested that I use a tar file to back up, as this preserves symlinks. With that, the home directory now takes up just 290-ish GiB, as it should. Now I will be distro hopping, wish me luck!
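For illustration, a minimal sketch of the tar approach (all paths under /tmp are made up for the demo; "sbird" stands in for the real home directory):

```shell
# Back up a directory with tar so symlinks survive as links, not full copies.
set -e
mkdir -p /tmp/tar-demo/home/sbird
echo "data" > /tmp/tar-demo/home/sbird/file.txt
ln -s file.txt /tmp/tar-demo/home/sbird/link.txt   # a symlink inside the tree

# -c create, -f archive name, -C change to this directory before archiving
tar -cf /tmp/tar-demo/backup.tar -C /tmp/tar-demo/home sbird

# Restoring elsewhere keeps link.txt as a symlink (-p preserves permissions)
mkdir -p /tmp/tar-demo/restore
tar -xpf /tmp/tar-demo/backup.tar -C /tmp/tar-demo/restore
ls -l /tmp/tar-demo/restore/sbird
```

Because the archive is a single file, it can sit on exFAT even though exFAT itself can't represent the symlinks stored inside it.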

update: I was able to copy it! There are still some folders that are really big (as many have said, it is probably because symlinks aren't supported in exFAT; when I transfer these files over to btrfs, will the symlinks come back, or are they permanently gone?). But after uninstalling Steam and Kdenlive (each taking a ridiculous amount of storage), removing a couple of games I don't really play, and deleting old folders that lingered around from already-uninstalled programs, I now have enough space to fit my home folder on the SSD (about 23 GiB left, so the lack of symlinks still hurts, but still, it fits!)

When running

rsync -Paz /home/sbird "/run/media/sbird/My Passport/sbird"

as suggested by someone, I run into an out-of-storage error midway. Why is this? My home folder's disk usage is about 385 GiB, and there is around 780 GiB of free space on the external SSD (which already has stuff like photos and documents). Does rsync make double copies of things or something? That would be kind of silly. Or is it some other issue?

Note that the SSD is from a reputable brand (Western Digital) so it is unlikely that it is reporting a fake amount of storage.

EDIT: Wait, is it because my laptop SSD is btrfs and the external SSD is exFAT? Could that be the issue? That would be kind of weird; why would files become so much bigger on the external SSD?

Thanks everyone for your help to troubleshoot! It was super helpful! Now I need to go to bed, since I've been up so late it's already tomorrow!

[–] drkt@scribe.disroot.org 12 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

rsync does not delete files at the target by default; it keeps all files at the target even after they have been deleted from the source location.

You must specify --delete for it to also delete files at the target location when they are deleted at the source.

If you want to be extra safe, you can use --delete-before to run the deletion pass before transferring files, ensuring that you always have space at the target.

[–] sbeak@sopuli.xyz 3 points 3 weeks ago (1 children)

The directory "sbird" in the SSD did not exist beforehand though?

[–] drkt@scribe.disroot.org 5 points 3 weeks ago

Are you saying this is your first run?

Run 'ncdu /run/media/sbird' to find out why there's no space on it.

[–] SlurpingPus@lemmy.world 6 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

The simplest explanation for the size difference could be if you have a symlink in your home folder pointing outside it. Idk if rsync traverses symlinks and filesystems by default, i.e. goes into linked folders instead of just copying the link, but you might want to check that. Note also that exFAT doesn't support symlinks, dunno what rsync does in that case.

It would be useful to run ls -R >file.txt in both the source and target directories and diff the files to see if the directory structure changed. (The -l option would report many changes, since exFAT doesn't support Unix permissions either.) Apps like Double Commander can diff the directories visually (be sure to uncheck ‘compare by content’).

As others mentioned, if you have hardlinks in the source, they could be copied multiple times to the target, particularly since exFAT, again, doesn't have hardlinks. But the primary source of hardlinks in normal usage would probably be git, which employs them to compact its structures, and I doubt that you have >300 GB of git repositories.

[–] bleistift2@sopuli.xyz 3 points 3 weeks ago (1 children)

Idk if rsync traverses symlinks and filesystems by default,

From the man page:

Beginning with rsync 3.0.0, rsync always sends these implied directories as real directories in the file list, even if a path element is really a symlink on the sending side. This prevents some really unexpected behaviors when copying the full path of a file that you didn't realize had a symlink in its path.

That means, if you’re transferring the file ~/foo/bar/file.txt, where ~/foo/bar/ is a symlink to ~/foo/baz, the baz directory will essentially be duplicated and end up as the real directory /SSD/foo/bar and /SSD/foo/baz.

[–] SlurpingPus@lemmy.world 1 points 3 weeks ago

Yeah, that would do it. If OP has such symlinks, they probably need to add an exception for rsync.

[–] Wildmimic@anarchist.nexus 3 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

A second possibility is the copy-on-write (deduplication) feature of btrfs. If he made copies of files on his SSD, they only take up extra space once something is changed; that's how I keep 5 differently modded Cyberpunk 2077 installations on my drive while using only a fraction of the space that would otherwise be needed. I wouldn't be able to copy this drive 1:1 onto a different filesystem.

[–] SlurpingPus@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago)

Ah, I knew the mention of btrfs heebied my jeebies a little, but forgot about the CoW thing.

I'm guessing some btrfs-specific utils are necessary to figure out how much it cow'ed.

[–] riskable@programming.dev 5 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Simple: exFAT does not support symbolic links. So every file that's just a symbolic link on your btrfs filesystem is getting copied in full (the link is being resolved) to your exFAT drive.

Solution: Don't use exFAT. For backups from btrfs, I recommend using btrfs with compression enabled.

Also don't forget to rebalance your btrfs partitions regularly to reclaim lost space! Also, delete old snapshots!

[–] sbeak@sopuli.xyz 2 points 3 weeks ago

That makes a lot of sense. I can't reformat the external SSD though, since it has a bunch of other files and needs to be used by my family (who are mostly Windows users)

[–] olosta@lemmy.world 4 points 3 weeks ago (2 children)

Maybe you have hard links or sparse files in your source directory. Try with -H for hard links first. You can try with --sparse but I think hard links are more likely.

[–] sbeak@sopuli.xyz 1 points 3 weeks ago (1 children)

Using -H throws an error, as hardlinks aren't supported in exFAT, it seems.

[–] SlurpingPus@lemmy.world 3 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

By the way, do you have lots of torrents downloaded or large virtual machines installed? Both torrent clients and virtual machine managers use ‘sparse files’ to save space until you actually download the whole torrent or write a lot to the VM's disk. Those files would be copied at full un-sparse size to exFAT.

If you have folders with such content, you can use e.g. Double Commander to check the actual used size of those folders (with Ctrl-L in Doublecmd). Idk which terminal utils might give you those numbers in place, but the aforementioned ncdu can calculate them and present them as a tree.

Edit: silly me, of course du is the util to use, typically as du -hsc dirname.
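A quick sketch of how a sparse file shows up in du (the path and 1 GiB size are made up):

```shell
# A sparse file looks huge by apparent size but occupies almost no disk.
set -e
mkdir -p /tmp/sparse-demo
truncate -s 1G /tmp/sparse-demo/disk.img     # 1 GiB of holes, no data written

du -sh /tmp/sparse-demo/disk.img                   # allocated blocks: ~0
du -sh --apparent-size /tmp/sparse-demo/disk.img   # reported size: 1.0G
```

Copying such a file to exFAT materializes the holes, which is one way a 385 GiB home directory can balloon past 780 GiB.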

[–] sbeak@sopuli.xyz 1 points 3 weeks ago (1 children)

using du -hsc returns 384G with /home/sbird, and 150G inside the external SSD (when it does not have any of the files transferred with rsync)

[–] SlurpingPus@lemmy.world 1 points 3 weeks ago

For a typical user, hard links would be mostly employed by git for its internal structures, and it's difficult to accumulate over 300 GB of git repos.

Sparse files would actually be more believable, since they're used by both torrent clients and virtual machines.

[–] confusedpuppy@lemmy.dbzer0.com 4 points 3 weeks ago (1 children)

There might be a possibility that recursion is happening and a directory is looping into itself and filling up your storage.

I have some suggestions for your command to help make a more consistent experience with rsync.

1: --dry-run (-n) is great for troubleshooting issues. It performs a fake transfer so you can sort issues before moving any data. Remove this option when you are confident about making changes.

2: --verbose --human-readable (-vh) will give you visual feedback so you can see what is happening. Combine this with --dry-run so you get a full picture of what rsync will attempt to do before any changes are made.

3: --compress (-z) might not be suitable for this specific job; as I understand it, it's meant to compress data during transfers over a network. In your command's current state, it just adds extra processing, which might not be useful for a directly connected device.

4: If you are transferring directories/folders, I found more consistent behaviour from rsync by adding a trailing slash at the end of a path. For example use "/home/username/folder_name/" and not "/home/username/folder_name". I've run into recursion issues by not using a trailing slash.

Don't use a trailing slash if you are transferring a single file. That distinction helps me to understand what I'm transferring too.

5: --delete will make sure your source folder and destination folder are a 1:1 match. Any files deleted in the source folder will be deleted in the destination folder. If you want to keep any and all added files in your destination folder, this option can be ignored.

--archive (-a) and --partial --progress (-P) are both good and don't need to be changed or removed.

If you do happen to be running into a recursion issue that's filling up your storage, you may need to look into using the --exclude option to exclude the problem folder.

[–] sbeak@sopuli.xyz 1 points 3 weeks ago (2 children)

How do I find which folder is causing problems? When using --verbose and --dry-run, it goes way too fast and the terminal's scrollback can't hold all of the history.

[–] bleistift2@sopuli.xyz 4 points 3 weeks ago (1 children)

You can store the output of rsync in a file by using rsync ALL_THE_OPTIONS_YOU_USED > rsync-output.txt. This creates a file called rsync-output.txt in your current directory which you can inspect later.

This, however, means that you won’t see the output right away. You can also use rsync ALL_THE_OPTIONS_YOU_USED | tee rsync-output.txt, which will both create the file and display the output on your terminal while it is being produced.

[–] sbeak@sopuli.xyz 2 points 3 weeks ago (2 children)

Having a quick scroll of the output file (neat tip with the > to get a text file, thanks!), nothing immediately jumps out at me. There aren't any repeated folders or anything like that from a glance. Anything I should look out for?

[–] bleistift2@sopuli.xyz 2 points 3 weeks ago (1 children)

You checked 385GiB of files by hand? Is that size made up by a few humongously large files?

I suggest using uniq to check if you have duplicate files in there. (uniq’s input must be sorted first). If you still have the output file from the previous step, and it’s called rsync-output.txt, do sort rsync-output.txt | uniq -dc. This will print the duplicates and the number of their occurrences.
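In miniature, the pipeline looks like this (the sample input lines are made up):

```shell
# uniq only detects *adjacent* duplicates, hence the sort first;
# -d prints only duplicated lines, -c prefixes each with its count.
printf 'foo\nbar\nfoo\nbaz\nfoo\n' | sort | uniq -dc
# prints (with leading spaces):  3 foo
```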

[–] sbeak@sopuli.xyz 1 points 3 weeks ago (1 children)

when using uniq nothing is printed (I'm assuming that means no duplicates?)

[–] bleistift2@sopuli.xyz 2 points 3 weeks ago (2 children)

I’m sorry. I was stupid. If you had duplicates due to a file system loop or symlinks, they would all be under different names. So you wouldn’t be able to find them with this method.

[–] sbeak@sopuli.xyz 1 points 3 weeks ago

Ok then, that makes sense

[–] sbeak@sopuli.xyz 1 points 3 weeks ago (1 children)

running du command with --count-links as suggested by another user returns 384G (so that isn't the problem it seems)

[–] bleistift2@sopuli.xyz 1 points 3 weeks ago

du --count-links only counts hard-linked files multiple times. I assumed you had a symlink loop that rsync would have tried to unwrap.

For instance:

$ ls -l
foo -> ./bar
bar -> ./foo

If you tried to rsync that, you’d end up with the directories foo, bar, foo/bar, bar/foo, foo/bar/foo, bar/foo/bar, foo/bar/foo/bar, ad infinitum, in the target directory.
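You can reproduce such a loop safely in a scratch directory (path made up):

```shell
# Two symlinks pointing at each other; resolving either fails with
# "too many levels of symbolic links" (ELOOP).
set -e
mkdir -p /tmp/loop-demo && cd /tmp/loop-demo
ln -sf ./bar foo
ln -sf ./foo bar
readlink -f foo || echo "unresolvable: symlink loop"
```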

[–] confusedpuppy@lemmy.dbzer0.com 2 points 3 weeks ago

If you don't spot any recursion issues, I'd suggest looking for other issues and not spend too much time here. At least now you have some troubleshooting knowledge going forward. Best of luck figuring out the issue.

[–] confusedpuppy@lemmy.dbzer0.com 1 points 3 weeks ago

Does your terminal have a scroll back limit? You may need to change that setting if there is a limit.

That will depend on which terminal you are using and it may have a different name so I can't really help more with this specific issue. You'll have to search that up based on the terminal you are using.

[–] bleistift2@sopuli.xyz 3 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Could it be you have lots of tiny files and/or a rather large-ish block size on your SSD?

You can check the block size with sudo blockdev --getbsz /dev/$THE_DEVICE.

[–] sbeak@sopuli.xyz 2 points 3 weeks ago (2 children)

using the command returns 512 for the external SSD and 4096 for the SSD in my laptop. What does that mean?

[–] bleistift2@sopuli.xyz 2 points 3 weeks ago

What does that mean?

Imagine your hard drive like a giant cupboard of drawers. Each drawer can only have one label, so you must only ever store one “thing” in one drawer, otherwise you wouldn’t be able to label the thing accurately and end up not knowing what went where.

If you have giant drawers (a large block size), but only tiny things (small files) to store, you end up wasting a lot of space in the drawer. It could fit a desktop computer, but you’re only putting in a phone. This problem is called “internal fragmentation” and causes files to take up way more space than it would seem they need.
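A rough back-of-envelope sketch of that waste (the file count and average are hypothetical numbers, not measured from OP's disk):

```shell
# Each file wastes, on average, about half a block of slack space.
block=4096                          # bytes per block
files=100000                        # e.g. 100k small files
avg_waste=$((block / 2))            # ~half a block per file
total=$((files * avg_waste))
echo "approx. wasted bytes: $total" # 204800000, i.e. ~195 MiB
```

So even a million tiny files on a 4096-byte filesystem waste on the order of a couple of GiB, far short of the discrepancy seen here.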

–––––

However, in your case, the target block size is actually smaller, so this is not the issue you’re facing.

[–] SlurpingPus@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago)

Means a file that's one byte in size will take at minimum 512 bytes on the external disk, but 4 KB on the internal one. If it were the other way around, that would partially explain the difference in space used.

In any case, I doubt that the block sizes would make so much difference in typical usage.

[–] degenerate_neutron_matter@fedia.io 3 points 3 weeks ago (2 children)

BTRFS supports compression and deduplication, so the actual disk space used might be less than the total size of your home directory. I'd run du -sh --apparent-size /home/sbird to check how large your home dir actually is. If it's larger than 780 GiB, there's your problem. Otherwise there might be hardlinks which rsync is copying multiple times; add the -H flag to copy hardlinks as hardlinks.

[–] sbeak@sopuli.xyz 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

382G for /home/sbird (definitely not more than 780G), so that is strange. Using -H doesn't work since the external SSD is exFAT (which, from a quick search, doesn't support hardlinks).

[–] degenerate_neutron_matter@fedia.io 2 points 3 weeks ago (1 children)

You can rerun the du command with --count-links to count hardlinks multiple times. If that shows >780GiB you have a lot of hardlinks somewhere, which you can narrow down by rerunning the command on each of the subdirectories in your home directory.

Your options would be to delete the hardlinks to decrease your total file size, exclude them from the rsync with --exclude, or reformat your SSD with a filesystem that supports hardlinks.
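A quick sketch of how --count-links exposes hardlinks (paths and sizes made up):

```shell
# du counts a hard-linked file once by default; --count-links counts it
# once per link, so the difference reveals hardlinked data.
set -e
mkdir -p /tmp/hl-demo/dir
truncate -s 1M /tmp/hl-demo/dir/a          # a 1 MiB file
ln /tmp/hl-demo/dir/a /tmp/hl-demo/dir/b   # hard link, same inode

du -sb /tmp/hl-demo/dir                # counts the 1 MiB once
du -sb --count-links /tmp/hl-demo/dir  # counts it for each link
```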

[–] sbeak@sopuli.xyz 1 points 3 weeks ago (1 children)

With --count-links, it is just 384G so that is probably not the issue?

That's odd, maybe it has to do with symlinks? Adding --dereference to the du command will count the file size of the files referenced by symlinks. If that doesn't show anything abnormal, I'd compare the directory sizes between your home directory and the rsync backup and try to find where they differ significantly. If it does show a much larger size, narrow down the location of the relevant symlinks (may be a hidden directory) and either delete them or exclude them from the rsync.
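A small sketch of what --dereference changes (paths and the 2 MiB size are made up):

```shell
# du follows symlinks only with -L/--dereference, so a link to a large file
# outside the directory inflates the -L total.
set -e
mkdir -p /tmp/deref-demo/outside /tmp/deref-demo/dir
truncate -s 2M /tmp/deref-demo/outside/big
ln -s ../outside/big /tmp/deref-demo/dir/big-link

du -sb  /tmp/deref-demo/dir    # the link itself: a handful of bytes
du -sbL /tmp/deref-demo/dir    # followed: includes the 2 MiB target
```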

[–] bleistift2@sopuli.xyz 2 points 3 weeks ago (3 children)

Personally, I have no more tips than those that have already been presented in this comment section. What I would do now to find out what’s going on is the age-old divide-and-conquer debugging technique:

Using rsync or a file manager (yours is Dolphin), only copy a few top-level directories at a time to your external drive. Note the directories you are about to move before each transfer. After each transfer check if the sizes of the directories on your internal drive (roughly) match those on your external drive (They will probably differ a little bit). You can also use your file manager for that.

If all went fine for the first batch, proceed to the next until you find one where the sizes differ significantly. Then delete that offending batch from the external drive. Divide the offending batch into smaller batches (select fewer directories if you tried transferring multiple; or descend into a single directory and copy its subdirectories piecewise like you did before).

In the end you should have a single directory or file which you have identified as problematic. That can then be investigated further.
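The batch-compare step can be sketched on the command line; the directory names and file contents below are made up for the demo:

```shell
# Compare the apparent size of each top-level directory in a source tree
# against its counterpart in the copy, and flag any pair that differs.
set -e
mkdir -p /tmp/dc-demo/src/ok /tmp/dc-demo/dst/ok \
         /tmp/dc-demo/src/bad /tmp/dc-demo/dst/bad
echo same > /tmp/dc-demo/src/ok/f;  echo same > /tmp/dc-demo/dst/ok/f
echo tiny > /tmp/dc-demo/src/bad/f; echo muchbigger > /tmp/dc-demo/dst/bad/f

for d in /tmp/dc-demo/src/*/; do
  name=$(basename "$d")
  s=$(du -sb "$d" | cut -f1)
  t=$(du -sb "/tmp/dc-demo/dst/$name" | cut -f1)
  [ "$s" -eq "$t" ] || echo "MISMATCH: $name ($s vs $t)"
done
```

Sizes on two different filesystems will rarely match byte-for-byte, so in practice you would flag only large differences rather than any difference.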

[–] sbeak@sopuli.xyz 1 points 3 weeks ago (7 children)

Something interesting that I found: according to Dolphin, many folders have many GB extra (e.g. 52 GB vs 66 GB for the Documents folder, which is kind of crazy), while Filelight records 52 GB vs 112 GB for the Documents folder, which, if true, is kind of insane. Using du -sh records 53G vs 136G (they're the same when using --apparent-size, weird; specifically for the Godot directory, it's 3.8 GB vs 41 GB!!!). Files like videos and games seem to be about the same size, while Godot projects with git are much bigger. Weird.

[–] bleistift2@sopuli.xyz 2 points 3 weeks ago (1 children)

These differences really are insane. Maybe someone more knowledgeable can comment on why different tools differ so wildly in the total size they report.

I have never used BTRFS, so I must resort to forwarding googled results like this one.

Could you try compsize ~? If the Perc column is much lower than 100% or the Disk Usage column is much lower than the Uncompressed column, then you have some BTRFS-specific file-size reduction on your hands, which your external exFAT naturally can’t replicate.

[–] bleistift2@sopuli.xyz 1 points 3 weeks ago (1 children)

It’s good you found some pathological examples, but I’m at the end of my rope here.

You can use these examples and the other information you gathered so far and ask specifically how these size discrepancies can be explained and maybe mitigated. I suggest more specialized communities for this such as !linux@lemmy.ml, !linux@programming.dev, !linux@lemmy.world, !linux4noobs@programming.dev, !linux4noobs@lemmy.world, !linuxquestions@lemmy.zip.

[–] sbeak@sopuli.xyz 2 points 3 weeks ago

I have cross posted to a Linux community. Thank you so much for all your help :DDDD

[–] sbeak@sopuli.xyz 1 points 3 weeks ago

Oh that's actually a good idea. Thanks person! I will report back soon

[–] bleistift2@sopuli.xyz 2 points 3 weeks ago (4 children)

Let’s back up and check your assumptions: How did you check that the disk usage of your home folder is 385GiB and that there are 780GiB of free disk space on your external drive?

[–] sbeak@sopuli.xyz 1 points 3 weeks ago (2 children)

Checking "properties" using Dolphin. Could that be incorrect?

[–] bleistift2@sopuli.xyz 1 points 3 weeks ago

I’d say you can trust that.

[–] Dirtboy@lemmy.world 1 points 3 weeks ago

Does that include hidden folders?

[–] whyNotSquirrel@sh.itjust.works 1 points 3 weeks ago (1 children)

is it pointing to the right folder? I'm never sure if quotes are enough to escape space characters (I actually avoid spaces to limit trouble)

[–] bleistift2@sopuli.xyz 1 points 3 weeks ago

Quotes are enough to handle spaces. That file path is valid.
