this post was submitted on 11 Oct 2025

Selfhosted


Have any of you had success setting up an arr stack with rootless Podman Quadlets? I really like the idea of Quadlets, but I can't get it to work.

Any guide and/or experience sharing would be greatly appreciated.

I set up Rocky Linux 10 with Podman 5.4.2, but after the images were pulled, the quadlets kept crashing.

Should I keep digging down this rabbit hole, or should I switch back to Docker Compose?
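This is roughly what I've been running to inspect the failures (unit names depend on your quadlet file names, so treat these as examples):

```shell
# A *.container file named sonarr.container becomes sonarr.service
# in the systemd user session after a daemon-reload:
systemctl --user daemon-reload
systemctl --user status sonarr.service
journalctl --user -xeu sonarr.service
# Check whether quadlet can parse the unit files at all (binary path
# varies by distro; on RHEL-likes it is usually under /usr/libexec/podman):
/usr/libexec/podman/quadlet -dryrun
```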

[–] Eldaroth@lemmy.world 7 points 5 months ago* (last edited 5 months ago) (3 children)

Nice, I made the move from Docker to Podman a couple of months ago myself. I'm now running the arr stack, Nextcloud, Immich and some other services as Quadlets. File permissions, due to Podman's rootless nature, were usually the culprit when something wasn't working properly.

I can share the Quadlet systemd files I use for the arr stack. I deployed it as a pod:

[Unit]
Description=Arr-stack pod

[Pod]
PodName=arr-stack
# Jellyseerr Port Mapping
PublishPort=8055:5055
# Sonarr Port Mapping
PublishPort=8089:8989
# Radarr Port Mapping
PublishPort=8078:7878
# Prowlarr Port Mapping
PublishPort=8096:9696
# Flaresolverr Port Mapping
PublishPort=8091:8191
# qBittorrent Port Mapping
PublishPort=8080:8080
***
[Unit]
Description=Gluetun Container

[Container]
ContainerName=gluetun
EnvironmentFile=global.env
EnvironmentFile=gluetun.env
Environment=FIREWALL_INPUT_PORTS=8080
Image=docker.io/qmcgaw/gluetun:v3.40.0
Pod=arr-stack.pod
AutoUpdate=registry
PodmanArgs=--privileged
AddCapability=NET_ADMIN
AddDevice=/dev/net/tun:/dev/net/tun

Volume=%h/container_volumes/gluetun/conf:/gluetun:Z,U

Secret=openvpn_user,type=env,target=OPENVPN_USER
Secret=openvpn_password,type=env,target=OPENVPN_PASSWORD

[Service]
Restart=always

[Install]
WantedBy=default.target
***
[Unit]
Description=qBittorrent Container
Requires=gluetun.service
After=gluetun.service

[Container]
ContainerName=qbittorrent
EnvironmentFile=global.env
Environment=WEBUI_PORT=8080
Image=lscr.io/linuxserver/qbittorrent:5.1.2
AutoUpdate=registry
UserNS=keep-id:uid=1000,gid=1000
Pod=arr-stack.pod
Network=container:gluetun

Volume=%h/container_volumes/qbittorrent/conf:/config:Z,U
Volume=%h/Downloads/completed:/downloads:z,U
Volume=%h/Downloads/incomplete:/incomplete:z,U
Volume=%h/Downloads/torrents:/torrents:z,U

[Service]
Restart=always

[Install]
WantedBy=default.target
***
[Unit]
Description=Prowlarr Container
Requires=gluetun.service
After=gluetun.service

[Container]
ContainerName=prowlarr
EnvironmentFile=global.env
Image=lscr.io/linuxserver/prowlarr:2.0.5
AutoUpdate=registry
UserNS=keep-id:uid=1000,gid=1000
Pod=arr-stack.pod
Network=container:gluetun

HealthCmd=["curl","--fail","http://127.0.0.1:9696/prowlarr/ping"]
HealthInterval=30s
HealthRetries=10

Volume=%h/container_volumes/prowlarr/conf:/config:Z,U

[Service]
Restart=always

[Install]
WantedBy=default.target
***
[Unit]
Description=Flaresolverr Container

[Container]
ContainerName=flaresolverr
EnvironmentFile=global.env
Image=ghcr.io/flaresolverr/flaresolverr:v3.4.0
AutoUpdate=registry
Pod=arr-stack.pod
Network=container:gluetun

[Service]
Restart=always

[Install]
WantedBy=default.target
***
[Unit]
Description=Radarr Container

[Container]
ContainerName=radarr
EnvironmentFile=global.env
Image=lscr.io/linuxserver/radarr:5.27.5
AutoUpdate=registry
UserNS=keep-id:uid=1000,gid=1000
Pod=arr-stack.pod
Network=container:gluetun

HealthCmd=["curl","--fail","http://127.0.0.1:7878/radarr/ping"]
HealthInterval=30s
HealthRetries=10

# Disable SecurityLabels due to SMB share
SecurityLabelDisable=true
Volume=%h/container_volumes/radarr/conf:/config:Z,U
Volume=/mnt/movies:/movies
Volume=%h/Downloads/completed/radarr:/downloads:z,U

[Service]
Restart=always

[Install]
WantedBy=default.target
***
[Unit]
Description=Sonarr Container

[Container]
ContainerName=sonarr
EnvironmentFile=global.env
Image=lscr.io/linuxserver/sonarr:4.0.15
AutoUpdate=registry
UserNS=keep-id:uid=1000,gid=1000
Pod=arr-stack.pod
Network=container:gluetun

HealthCmd=["curl","--fail","http://127.0.0.1:8989/sonarr/ping"]
HealthInterval=30s
HealthRetries=10

# Disable SecurityLabels due to SMB share
SecurityLabelDisable=true
Volume=%h/container_volumes/sonarr/conf:/config:Z,U
Volume=/mnt/tv:/tv
Volume=%h/Downloads/completed/sonarr:/downloads:z,U

[Service]
Restart=always

[Install]
WantedBy=default.target
***
[Unit]
Description=Jellyseerr Container

[Container]
ContainerName=jellyseerr
EnvironmentFile=global.env
Image=docker.io/fallenbagel/jellyseerr:2.7.3
AutoUpdate=registry
Pod=arr-stack.pod
Network=container:gluetun

Volume=%h/container_volumes/jellyseerr/conf:/app/config:Z,U

[Service]
Restart=always

[Install]
WantedBy=default.target
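In case it helps: for rootless Quadlets, these files go into the user Quadlet directory and get turned into services on a daemon-reload. A rough sketch (file names are the ones from my listing above):

```shell
# Each unit goes into ~/.config/containers/systemd/ with the matching
# extension, e.g. arr-stack.pod, gluetun.container, qbittorrent.container.
# Quadlet generates user services from them on reload:
systemctl --user daemon-reload
# A pod file named arr-stack.pod becomes arr-stack-pod.service;
# container files keep their name (gluetun.container -> gluetun.service):
systemctl --user start arr-stack-pod.service
systemctl --user status gluetun.service
```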

I run my podman containers in a VM running Alma Linux. Works pretty great so far.

I had the same issue when debugging systemctl errors; journalctl wasn't very helpful. At one point I just ran podman logs -f <container> in a while loop in another terminal, just to catch the application's logs. Not the most sophisticated approach, but it works πŸ˜„

[–] thenorthernmist@lemmy.world 2 points 5 months ago

This is nice; it makes me want to set up my stack with Podman again!

[–] filister@lemmy.world 1 points 5 months ago (1 children)

Nice, thanks for sharing. How did you solve the file permission issue?

Also, I see you put all your services into a single pod quadlet. What I'm trying to achieve is to have every service as a separate systemd unit file that I can control independently. In that case you also have the added complication of the network setup.

[–] Eldaroth@lemmy.world 4 points 5 months ago

That's where UserNS=keep-id:uid=1000,gid=1000 comes into play. It "maps" the container's user to your local user on the host, to some extent; there is a deeper explanation of what exactly it does in this GitHub issue: https://github.com/containers/podman/issues/24934
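A quick way to see the effect yourself (a hypothetical one-off; any small image works):

```shell
# Rootless without keep-id: root inside the container maps to your host user.
# With keep-id, the given uid/gid *inside* the container maps to your host
# user instead, so files the container writes end up owned by you:
podman run --rm --userns=keep-id:uid=1000,gid=1000 docker.io/library/alpine id
# The container process should report uid=1000/gid=1000 here.
```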

Well, the pod only links the containers together; it's not one systemd file. Every container has its own file, as do the pod and the network (separated by '***' in my code block above). You can still start and stop each container as a service separately, or the whole pod with all containers linked to it. Pods have the advantage that the containers in them can talk to each other more easily.

The network I had just created to separate my services from each other. Thinking about it, that was the old setup; since I started using gluetun and run it as a privileged container, it's using the host network anyway. I edited my post above and removed the network unit file.

[–] iLStrix@lemmy.world 1 points 2 months ago (1 children)

Hey, idk if you have a solution for me, but UserNS is no longer allowed together with Pod. Since there is so insanely little information on Quadlets, I'm having a hard time getting this running. Have you updated yet and found a solution to the problem? (I'm new to Podman; at least I got Jellyfin somewhat running haha)

[–] Eldaroth@lemmy.world 2 points 2 months ago

Yeah, I ran into that issue a couple of weeks ago as well, after updating Podman. It no longer allowed me to set per-container UID/GID mappings or UserNS when running in a pod, so I just took the containers out of the pod, as I couldn't be bothered, and run them as separate containers on the same network. Works just as well.

You just have to make sure to move the PublishPort block from the pod quadlet to the gluetun container (for all the containers which route their traffic through gluetun, i.e. which have 'Network=container:gluetun' set). This solves the problem and still allows you to use UserNS or UID/GID mappings on the containers. No disadvantages so far; you just lose the convenience of stopping/starting all the containers at once through the pod. But I'd rather take this 'inconvenience' than spend days troubleshooting how to make it work with a pod again.
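To illustrate, a minimal sketch of the moved block (ports copied from the pod file above; the shared network name here is just a placeholder for whatever network you create):

```ini
# gluetun.container (sketch): publish the ports on the gluetun container
# itself, since the other containers join its network namespace via
# Network=container:gluetun and can't publish ports of their own.
[Container]
ContainerName=gluetun
Image=docker.io/qmcgaw/gluetun:v3.40.0
# Hypothetical shared network replacing the pod:
Network=arr-net
# Ports formerly on the pod quadlet:
PublishPort=8055:5055
PublishPort=8089:8989
PublishPort=8078:7878
PublishPort=8096:9696
PublishPort=8091:8191
PublishPort=8080:8080
```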