this post was submitted on 16 May 2026

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


https://kb.synology.com/en-global/DSM/tutorial/Docker_container_cant_access_the_folder_or_file#x_anchor_idcd3f1170a3

Why allow "everyone" to have read write permission to shared folders in order to run container manager? Wouldn't this be insecure?

top 4 comments
[–] anamethatisnt@sopuli.xyz 2 points 59 minutes ago

I mean, unless specified otherwise, most containers run through Synology's Container Manager will run as root. That said, if you want to secure things, there are guides.

An alternative path would be to set up a dedicated, restricted docker user and have docker compose run your images as that user:
https://drfrankenstein.co.uk/step-2-setting-up-a-restricted-docker-user-and-obtaining-ids/

Jellyfin example
https://drfrankenstein.co.uk/jellyfin-in-container-manager-on-a-synology-nas-hardware-transcoding/

From there you could go further and use the guides above to create one user per docker image and give them different permissions depending on need.
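To make that concrete, here's a minimal compose sketch of the restricted-user approach from the guides above. The UID/GID values and the /volume1 paths are placeholders: substitute the IDs reported by `id <your-docker-user>` and your own share layout.

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    # Run the container process as the restricted user instead of root.
    # 1234:65432 are placeholder IDs -- take yours from `id <user>`.
    user: "1234:65432"
    volumes:
      # Only this user now needs read/write on these folders,
      # so no "everyone" write permission is required.
      - /volume1/docker/jellyfin/config:/config
      - /volume1/data/media:/media:ro
    restart: unless-stopped
```

With `user:` set, the container never runs as root, and you grant the mapped folders to that one account instead of opening them up broadly.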

[–] non_burglar@lemmy.world 1 point 1 hour ago

That seems to be what Synology is suggesting, and you're right, this wouldn't be the best configuration if security is the goal.

[–] pulsewidth@lemmy.world 3 points 54 minutes ago* (last edited 52 minutes ago)

It's not as egregious as you think. The 'everyone' group means every Synology user account, not everyone on the network who can talk to the NAS. Any Synology user trying to access those files would still need both a Synology account and read/write permissions on the shared folder itself to actually reach them (e.g. via SMB/CIFS in a file explorer, app-level access through Synology File Manager, or SSH access granted to get in via a terminal).

I know it's a bit confusing, but it's correct. Docker often causes confusion here because there are two layers: file-level permissions (what this article covers) and share-level permissions. You need both to access folders and files via mapped drives / SMB. This setting just ensures that Docker containers, which can run under a variety of user names depending on how you configure Docker and the container, don't hit issues accessing files you expect them to reach. As Synology says, the default permission on the docker folder gives the 'everyone' group read-only access. That's enough for most container configs to at least run. If you then run into issues writing or modifying files, that's a clue you've missed some file permission configuration, and the only reason the container runs at all is that the default 'everyone' permission is saving your butt.
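The file-level layer can be illustrated locally with plain POSIX tools. This is just a sketch: the paths are throwaway placeholders, and Synology layers its own ACLs on top of these bits, but the idea is the same.

```shell
# Quick local demo of file-level permission bits, the layer this
# article is about. On a real NAS you'd inspect something like
# /volume1/docker/<app> instead of a temp path.
mkdir -p /tmp/permdemo/config

# Owner gets rwx, group rx, "everyone" read-only -- similar in spirit
# to the default Synology docker folder permission described above.
chmod 754 /tmp/permdemo/config

# The octal mode is what a container process's UID/GID is checked
# against; share-level (SMB) permissions are enforced separately.
stat -c '%a' /tmp/permdemo/config   # prints 754
```

A container running as a UID that is neither the owner nor in the group only gets the "everyone" bits here, so reads succeed but writes fail.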

[–] anamethatisnt@sopuli.xyz 1 point 1 minute ago

The main thing you gain from locking the docker images down to a separate low-privilege user that can only access what it really needs is protection if someone successfully attacks a project and you get infected when your Synology pulls a compromised image:latest.
It could limit the traversal of, say, ransomware that breaks free of the container but ends up with no permissions outside it.
Even with the user separation, though, I would probably purge the whole NAS and restore from my backup for my own peace of mind.
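For reference, the kind of container lockdown described here can be expressed in compose along these lines. The image name, IDs, and paths are all placeholders, and this is a sketch of the hardening options, not a complete config.

```yaml
services:
  app:
    image: example/app:latest     # placeholder image
    user: "1234:65432"            # dedicated low-privilege user
    read_only: true               # container root filesystem is immutable
    cap_drop:
      - ALL                       # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true    # block privilege escalation inside
    volumes:
      - /volume1/docker/app/config:/config   # only writable path
      - /volume1/data/media:/media:ro        # data mounted read-only
```

If something malicious does break out, the blast radius is limited to what that one user and the single writable mount can touch.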