I have a weird problem with NFS and volumes right now, maybe somebody here knows what’s up
I’m trying to move my Docker swarm to use volume mounts instead of bind mounts to an NFS share that is mounted on boot. As such, I started digging into the native support for this in Docker (which isn’t super obvious at first). You can basically do the following in your compose YAML:
```yaml
volumes:
  <name of volume>:
    driver: local
    driver_opts:
      type: "nfs"
      device: "<nfs address>:<nfs volume mount and path>"
```
Or create it through the command line, same diff. One could also add `o` in the `driver_opts` to specify `addr=`, which allows you to pass NFS mount options (think `nolock`, `soft`, or `hard`).
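For reference, the command-line equivalent looks something like this; the IP and export path below are placeholders, and it follows the local driver’s documented NFS form where `addr` and the mount options ride along in `o` while `device` carries the export path:

```bash
# Placeholder values: 192.168.1.10 is the NAS, /volume1/media the NFS export
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nolock,soft \
  --opt device=:/volume1/media \
  media
```

Anything the kernel’s `mount -t nfs` would accept should be fair game in `o`.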
A quick `docker-compose up` later and I do see the volume mounted, but trying to access it from the Sonarr web interface to add a show gets me `User bla can't write to this location`. So we fiddle with the NFS export for a tick to make sure things are okay. My final export looks like this (on a Synology).
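Roughly along these lines, anyway; the subnet and IDs below are placeholders, but the option set is what the Synology UI tends to generate:

```
/volume1/media  192.168.1.0/24(rw,async,no_wdelay,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)
```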
I’m not too sure about the `anonuid` stuff, but it’s generated from the Synology web UI and has worked for a while now with fstab mounts and direct binds, so I’m guessing that isn’t the problem. I’d like it to do `all_squash`, but it seems like Docker swarm doesn’t like that because of some internal chowning it does, so no joy there. Figured it might be something with Sonarr image permissions, but the PUID and PGID are set. To make sure I’m not cray cray I do a quick `docker exec -it <complicated swarm name> bash` and try to create a file on the share … and lo and behold, it works. Try Sonarr again, still no go.
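One caveat I only clocked afterwards: `docker exec` drops you in as root by default, so that test doesn’t really prove the user Sonarr runs as can write there. Repeating it as the app user would be the fairer check; a sketch, assuming the linuxserver.io image (where the runtime user is `abc`) and a hypothetical `/tv` library mount:

```bash
# Root write test (what I did above; root often sails through,
# especially with no_root_squash on the export)
docker exec <complicated swarm name> touch /tv/root-test

# Same write, but as the app user that PUID/PGID map onto
docker exec -u abc <complicated swarm name> touch /tv/user-test

# Sanity-check which uid/gid that user actually resolves to
docker exec <complicated swarm name> id abc
```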
I’ve also been tinkering with some volume plugins. I settled on Rancher Convoy at first, but that had the same issues; then I tried to run Openstack Netshare, which also does not work.
I guess my question is: how should this work, why is it so oddly hard, and why are the built-in solutions so obtuse and hard to discover? What am I missing here? Did I grab the wrong user from my hosts, and is it not allowed to write to the share?