
I'm hosting a few services using docker. For something like an openstreetmap tileserver, I'd like it to remain on my SSD, since fast disk access improves performance and the directory is unlikely to grow and fill the drive.

For other services like Nextcloud, speed isn't as important as storage size, so I might want those on a larger HDD RAID.

I know it's trivial to move the volumes directory to wherever, but can I move some volumes to one directory and some volumes to another?

top 18 comments
[–] Dave@lemmy.nz 19 points 1 day ago (3 children)

I don't know if this is naughty but I use bind mounts for everything, and docker compose to keep it all together.

You can map directories or even individual files to directories/files on the host computer.

Normally I make a directory for the service, then map all volumes inside a ./data directory or something like that. But you could easily bind to different directories. For example, for photoprism I mount my photos from a data drive for it to access, mount the main data/database to a directory that gets backed up, and mount the cache to a directory that doesn't get backed up.
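
A compose file for that kind of split might look roughly like this (host paths are made up, and the container paths are approximately PhotoPrism's defaults; check the image docs):

    services:
      photoprism:
        image: photoprism/photoprism
        volumes:
          # originals live on the big data drive, read-only
          - /mnt/data/photos:/photoprism/originals:ro
          # main data/database goes to a directory that gets backed up
          - /mnt/pool/photoprism/storage:/photoprism/storage
          # cache can be regenerated, so it skips the backup set
          - /mnt/scratch/photoprism/cache:/photoprism/storage/cache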

[–] suicidaleggroll@lemm.ee 15 points 1 day ago* (last edited 1 day ago) (1 children)

Same, I don't let Docker manage volumes for anything. If I need data to be persistent, I bind mount it to a subdirectory alongside the container's compose file. It makes backups much easier as well, since you can just stop all containers, back up everything in ~/docker or wherever you put all of your compose files and volumes, and then restart them all.

It also means you can go hog wild with docker system prune -af --volumes and there's no risk of losing any of your data.
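
A minimal sketch of that workflow, assuming everything lives under ~/docker with one subdirectory per stack (the backup path is made up):

    #!/bin/sh
    # stop every stack, archive the whole tree, start everything again
    cd ~/docker
    for d in */; do (cd "$d" && docker compose stop); done
    tar czf /mnt/backup/docker-$(date +%F).tar.gz -C ~ docker
    for d in */; do (cd "$d" && docker compose start); done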

[–] Dave@lemmy.nz 4 points 1 day ago

Yes that's what I do too!

Overnight cron to stop containers, run borgmatic, then start the containers again.
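
Something like the script above, wrapped for cron; the schedule and script path here are made up, and borgmatic reads its repo and source directories from its own config:

    # crontab entry, e.g. 3am nightly
    0 3 * * * /usr/local/bin/docker-backup.sh

    # docker-backup.sh: same stop/backup/start pattern as above
    cd ~/docker
    for d in */; do (cd "$d" && docker compose stop); done
    borgmatic
    for d in */; do (cd "$d" && docker compose start); done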

[–] catloaf@lemm.ee 9 points 1 day ago (1 children)

I've never used anything but bind mounts for data that needs to persist. Non-persistent data is fine on Docker volumes.

[–] Dave@lemmy.nz 14 points 1 day ago (1 children)

Docker wants you to use volumes, and that data is persistent too. They say volumes are much easier to back up. I disagree; I much prefer bind mounts, especially when it comes to selective backups.

Volumes are horrible. How would I easily edit a config file of the program running inside if the container doesn't even start?

Bind mounts + ZFS datasets are the way to go.
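
For the ZFS side, a dataset per service keeps snapshots and quotas per app; a rough sketch with invented pool and path names:

    # one dataset per service, mounted where compose expects it
    zfs create -p -o mountpoint=/srv/docker/nextcloud tank/docker/nextcloud
    # the service then bind mounts it, e.g.:
    #   volumes:
    #     - /srv/docker/nextcloud:/var/www/html
    # and per-service snapshots/rollbacks come for free:
    zfs snapshot tank/docker/nextcloud@before-upgrade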

[–] towerful@programming.dev 4 points 1 day ago (1 children)

I do that, until some container has permissions issues.
I tinker, try to fix it, give up and use a volume. Or I fix it, but it never seems to be the same fix.

[–] Dave@lemmy.nz 9 points 1 day ago

I have occasionally had permissions issues, but I tend to be able to fix them. Normally it's just a matter of deleting the files on the host and letting the container recreate them; it doesn't always work, but it usually does.
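
When deleting the files doesn't do it, matching the host directory's owner to whatever UID the container runs as usually works; roughly like this (the container name and 1000:1000 are just examples, and many images document a PUID/PGID instead):

    # check what user the container actually runs as
    docker exec mycontainer id
    # then make the host directory match
    sudo chown -R 1000:1000 ./data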

[–] mhzawadi@lemmy.horwood.cloud 6 points 1 day ago (1 children)

If you use a named volume, you can point it at any host path:

volumes:
  lemmy_pgsql:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: '/mnt/data/lemmy/pgsql'

Then in your service, reference that volume:

    volumes:
      - lemmy_pgsql:/var/lib/postgresql/data:Z

[–] ikidd@lemmy.world 3 points 22 hours ago (1 children)

Is there any advantage to bind mounting that way? I've only ever done it by specifying the path directly in the container, usually ./data:/data or some such. Never had a problem with it.

[–] mhzawadi@lemmy.horwood.cloud 2 points 20 hours ago (1 children)
[–] ikidd@lemmy.world 2 points 18 hours ago (1 children)

Well, I know you can define volumes for other filesystem drivers, but with bind mounts you don't need a top-level definition like that; you can just specify the path directly in the service's volumes and it will be bind mounted. I was just wondering if there was any actual benefit to defining the volume manually over the simple way.
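
The "simple way" being something like this (image and paths are placeholders):

    services:
      app:
        image: nginx
        volumes:
          # a host path on the left makes this an implicit bind mount;
          # no top-level volumes: block needed
          - ./data:/usr/share/nginx/html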

[–] mhzawadi@lemmy.horwood.cloud 3 points 16 hours ago (1 children)

In my case I need to use a named volume for Docker Swarm; I can also reuse a named volume in other services. If you're not using swarm, then just a bind mount should be fine.
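
The reuse part looks like this; a sketch with made-up names, two services sharing one named volume:

    services:
      app:
        image: nginx
        volumes:
          - shared_data:/usr/share/nginx/html
      backup:
        image: alpine
        command: tar czf /backup/site.tar.gz -C /data .
        volumes:
          - shared_data:/data:ro
          - ./backups:/backup

    volumes:
      shared_data: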

[–] ikidd@lemmy.world 1 points 16 hours ago

OK, yeah, that's a good point about swarms. I've generally not used any swarmed filesystem stuff where I needed persistence, just shared databases, so it hasn't come up.

No idea. I personally use PVs and PVCs (PersistentVolumes and PersistentVolumeClaims) with k3s, and it's trivial there with some downtime.
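
For reference, pinning data to a specific disk there is a PV bound to a host path plus a PVC that claims it; a sketch with invented names, sizes and paths:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: tiles-ssd
    spec:
      capacity:
        storage: 50Gi
      accessModes: ["ReadWriteOnce"]
      hostPath:
        path: /mnt/ssd/tiles
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: tiles-ssd
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ""
      volumeName: tiles-ssd
      resources:
        requests:
          storage: 50Gi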

[–] e0qdk@reddthat.com 8 points 1 day ago

You can run docker containers with multiple volumes, e.g. pass something like -v src1:dst1 -v src2:dst2 as arguments to docker run.

So -- if I understood your question correctly -- yes, you can do that.
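
Applied to the tileserver example from the post, something like this (image name and paths are invented):

    # hot database on the SSD, bulky rendered tiles on the HDD RAID;
    # each -v flag can point at a different disk
    docker run -d \
      -v /mnt/ssd/osm/db:/data/db \
      -v /mnt/hdd/osm/tiles:/data/tiles \
      my-tileserver-image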

[–] lka1988@sh.itjust.works 5 points 1 day ago

I have several NFS shares that host multiple docker volumes. So yes.
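
That's the same named-volume trick as the bind example above, just with NFS driver options; the server address and export path here are examples:

    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=192.168.1.50,rw,nfsvers=4"
          device: ":/export/media"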

[–] astrsk@fedia.io 1 points 1 day ago

This is mostly an IOPS-dependent answer. Do you have multiple hot services constantly hitting the disk? If so, it can be advantageous to split the heavy hitters across different disk controllers, which in high-redundancy setups means different dedicated pools. If it's a bunch of services just reading, filesystems like ZFS use caching to almost completely eliminate disk thrashing.