Martin Kletzander
2016-Apr-13 05:33 UTC
Re: [libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path
On Tue, Apr 12, 2016 at 06:24:16PM -0400, TomK wrote:
>On 4/12/2016 5:08 PM, John Ferlan wrote:
>> Having/using a root squash via an NFS pool is "easy" (famous last words)
>>
>> Create some pool XML (taking the example I have)
>>
>> % cat nfs.xml
>> <pool type='netfs'>
>>   <name>rootsquash</name>
>>   <source>
>>     <host name='localhost'/>
>>     <dir path='/home/bzs/rootsquash/nfs'/>
>>     <format type='nfs'/>
>>   </source>
>>   <target>
>>     <path>/tmp/netfs-rootsquash-pool</path>
>>     <permissions>
>>       <mode>0755</mode>
>>       <owner>107</owner>
>>       <group>107</group>
>>     </permissions>
>>   </target>
>> </pool>
>>
>> In this case 107:107 is qemu:qemu, and I used 'localhost' as the
>> hostname, but that can be an FQDN or IP address of the NFS server.
>>
>> You've already seen my /etc/exports
>>
>> virsh pool-define nfs.xml
>> virsh pool-build rootsquash
>> virsh pool-start rootsquash
>> virsh vol-list rootsquash
>>
>> Now instead of
>>
>> <disk type='file' device='disk'>
>>   <source file='/var/lib/one//datastores/0/38/disk.0'/>
>>   <target dev='hda'/>
>>   <driver name='qemu' type='qcow2' cache='none'/>
>> </disk>
>>
>> something like:
>>
>> <disk type='volume' device='disk'>
>>   <driver name='qemu' type='qcow2' cache='none'/>
>>   <source pool='rootsquash' volume='disk.0'/>
>>   <target dev='hda'/>
>> </disk>
>>
>> The volume name may be off, but it's perhaps close. I forget how to do
>> the readonly bit for a pool (again, my focus is elsewhere).
>>
>> Of course you'd have to adjust the nfs.xml above to suit your
>> environment and see what you see/get. The permissions on the pool and
>> the volumes in it become the key to how libvirt decides to "request
>> access" to a volume. "disk.1" having read access is probably not an
>> issue since you seem to be using it as a CDROM; however, "disk.0" is
>> going to be used for read/write and thus would have to be appropriately
>> configured...
>>
>
>Thanks John! Appreciated again.
>
>No worries, handle what's on the plate now and earmark this for checking
>once you have some free cycles. I can temporarily hop on one leg by
>using Martin Kletzander's workaround (it's a POC at the moment).
>
>I'll have a look at your instructions further, but wanted to find out:
>is that nfs.xml config a one-time thing? I'm spinning these up at will
>via the OpenNebula GUI, and if I have to update it for each VM, that
>breaks the cloud provisioning. I'll go over your notes again. I'm
>optimistic. :)

The more I'm thinking about it, the more I am convinced that the
workaround is actually not a workaround. The only thing you need to do
is to have execute permission for others (precisely, for 'nobody' on the
NFS share) on every directory in the whole path. Without that, even the
pool won't be usable from libvirt. However, it does not pose any
security issue, as it only allows others to traverse the path. When
QEMU is launched, it has the proper "label", meaning the uid:gid to
access the file, so it will be able to read/write (or whatever
permissions you set there). It's just that libvirt does some checks,
for example that the path exists.

Hope that's understandable and that it will resolve your issue
permanently.

Have a nice day,
Martin
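[Editor's sketch] Martin's point — every directory on the path needs the execute ("traverse") bit for others — can be illustrated as follows. The directory layout below is a stand-in built under a temp dir, not the real OpenNebula datastore path:

```shell
# Stand-in for /var/lib/one/datastores/0/38/disk.0 (hypothetical layout
# for illustration only).
base=$(mktemp -d)
mkdir -p "$base/datastores/0/38"
touch "$base/datastores/0/38/disk.0"

# Grant o+x (traverse-only) on each directory component so a
# root-squashed libvirtd, acting as 'nobody', can canonicalize the
# path. o+x alone does not let others list directory contents.
for d in "$base" "$base/datastores" "$base/datastores/0" \
         "$base/datastores/0/38"; do
    chmod o+x "$d"
done

# Each mode string should now end in 'x' (the execute bit for others).
stat -c '%A %n' "$base" "$base/datastores" "$base/datastores/0/38"
```

On a real host, `namei -l /var/lib/one/datastores/0/38/disk.0` prints the permissions of every component of the path at once, which makes a missing `x` easy to spot.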
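[Editor's sketch] John's pool setup can be packaged as a one-shot script. Note that `virsh pool-define` is persistent — the pool definition survives libvirtd restarts — so this is a one-time step per host, not per VM. The virsh calls are left commented out because they need a running libvirtd and the NFS export in place; the script only writes John's example XML and checks that it is well formed:

```shell
# Write John's example pool definition (values from his mail; adjust
# the host, dir path, and owner/group uid:gid to your environment).
cat > /tmp/nfs.xml <<'EOF'
<pool type='netfs'>
  <name>rootsquash</name>
  <source>
    <host name='localhost'/>
    <dir path='/home/bzs/rootsquash/nfs'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/tmp/netfs-rootsquash-pool</path>
    <permissions>
      <mode>0755</mode>
      <owner>107</owner>
      <group>107</group>
    </permissions>
  </target>
</pool>
EOF

# Sanity-check well-formedness before handing the file to virsh.
python3 -c 'import xml.dom.minidom; xml.dom.minidom.parse("/tmp/nfs.xml")' \
    && echo 'XML OK'

# One-time, persistent definition (requires a running libvirtd):
#   virsh pool-define /tmp/nfs.xml
#   virsh pool-build rootsquash
#   virsh pool-start rootsquash
#   virsh vol-list rootsquash
```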
TomK
2016-Apr-13 13:19 UTC
Re: [libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path
On 4/13/2016 1:33 AM, Martin Kletzander wrote:
> On Tue, Apr 12, 2016 at 06:24:16PM -0400, TomK wrote:
>> On 4/12/2016 5:08 PM, John Ferlan wrote:
>>> [John's NFS-pool instructions trimmed; quoted in full above]
>
> The more I'm thinking about it, the more I am convinced that the
> workaround is actually not a workaround. The only thing you need to do
> is to have execute permission for others (precisely, for 'nobody' on
> the NFS share) on every directory in the whole path. Without that,
> even the pool won't be usable from libvirt. However, it does not pose
> any security issue, as it only allows others to traverse the path.
> When QEMU is launched, it has the proper "label", meaning the uid:gid
> to access the file, so it will be able to read/write (or whatever
> permissions you set there). It's just that libvirt does some checks,
> for example that the path exists.
>
> Hope that's understandable and that it will resolve your issue
> permanently.
>
> Have a nice day,
> Martin

That fits in with what's happening, for sure. I'm just not sure how
much of the work libvirtd does on the NFS mount as root vs. nobody vs.
oneadmin. If there were a way to find that out, it would help a lot. I
will give the nobody user setting a try, however.

Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.
TomK
2016-Apr-13 13:23 UTC
Re: [libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path
John Ferlan
2016-Apr-13 14:00 UTC
Re: [libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path
On 04/13/2016 09:23 AM, TomK wrote:
> On 4/13/2016 1:33 AM, Martin Kletzander wrote:
>> [Martin's explanation and the earlier quoted thread trimmed; quoted
>> in full above]
>>
>> _______________________________________________
>> libvirt-users mailing list
>> libvirt-users@redhat.com
>> https://www.redhat.com/mailman/listinfo/libvirt-users
>
> The only reason I said that this might be a 'workaround' is that John
> Ferlan commented he'll look at this later on. Ideally the OpenNebula
> community keeps the permissions for others at nil, and presumably that
> works on NFSv3 per the forum topic I included earlier from them. But
> if setting the permissions for nobody allows the functionality, I
> would be comfortable with that.

Martin and I were taking different paths... But yes, it certainly makes
sense given your error message about the canonical path and the need for
eXecute permissions... I think I started wondering about that first, but
then jumped into the NFS pool because that's what my reference point is
for root-squash. Since root squash essentially sends root's requests as
"nfsnobody" (IOW: others, not the user or group), the "o+x" approach is
the solution if you're going directly at the file.

John
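[Editor's sketch] For completeness, a root-squashing export on the NFS server might look like the line below. The path, client range, and anonymous ids are assumptions for illustration, not taken from this thread; on Linux NFS servers, root_squash is the default behaviour:

```
# /etc/exports (sketch): root on the client is remapped to the
# anonymous uid/gid -- nfsnobody, commonly 65534
/var/lib/one/datastores  192.168.1.0/24(rw,sync,root_squash,anonuid=65534,anongid=65534)
```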