Martin Kletzander
2016-Apr-12 19:40 UTC
Re: [libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path
[ It would be way easier to reply if you didn't top-post ]

On Tue, Apr 12, 2016 at 12:07:50PM -0400, TomK wrote:
>Hey John,
>
>Hehe, I got the right guy then.  Very nice!  And very good ideas, but I
>may need more time to reread and try them out later tonight.  I'm fully
>in agreement about providing more details.  Can't be accurate in a
>diagnosis if there isn't much data to go on.  This pool option is new to
>me.  Please tell me more about it.  Can't find it in the file below, but
>maybe it's elsewhere?
>
>( <pool type="fs"> ) perhaps rather than the "NFS" pool ( e.g. <pool
>type="netfs"> )
>
>Alright, here are the details:
>
>[root@mdskvm-p01 ~]# rpm -aq|grep -i libvir
>libvirt-daemon-driver-secret-1.2.17-13.el7_2.4.x86_64
>libvirt-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-driver-network-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-driver-lxc-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-driver-interface-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-config-network-1.2.17-13.el7_2.4.x86_64
>libvirt-client-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-driver-qemu-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-driver-storage-1.2.17-13.el7_2.4.x86_64
>libvirt-python-1.2.17-2.el7.x86_64
>libvirt-glib-0.1.9-1.el7.x86_64
>libvirt-daemon-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.4.x86_64
>libvirt-daemon-kvm-1.2.17-13.el7_2.4.x86_64
>[root@mdskvm-p01 ~]# cat /etc/release
>cat: /etc/release: No such file or directory
>[root@mdskvm-p01 ~]# cat /etc/*release*
>NAME="Scientific Linux"
>VERSION="7.2 (Nitrogen)"
>ID="rhel"
>ID_LIKE="fedora"
>VERSION_ID="7.2"
>PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
>ANSI_COLOR="0;31"
>CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
>HOME_URL="http://www.scientificlinux.org//"
>BUG_REPORT_URL="mailto:scientific-linux-devel@listserv.fnal.gov"
>
>REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
>REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
>REDHAT_SUPPORT_PRODUCT="Scientific Linux"
>REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
>Scientific Linux release 7.2 (Nitrogen)
>Scientific Linux release 7.2 (Nitrogen)
>Scientific Linux release 7.2 (Nitrogen)
>cpe:/o:scientificlinux:scientificlinux:7.2:ga
>[root@mdskvm-p01 ~]#
>
>[root@mdskvm-p01 ~]# mount /var/lib/one
>[root@mdskvm-p01 ~]# su - oneadmin
>Last login: Sat Apr  9 10:39:25 EDT 2016 on pts/0
>Last failed login: Tue Apr 12 12:00:57 EDT 2016 from opennebula01 on
>ssh:notty
>There were 9584 failed login attempts since the last successful login.
>[oneadmin@mdskvm-p01 ~]$ id oneadmin
>uid=9869(oneadmin) gid=9869(oneadmin)
>groups=9869(oneadmin),992(libvirt),36(kvm)
>[oneadmin@mdskvm-p01 ~]$ pwd
>/var/lib/one
>[oneadmin@mdskvm-p01 ~]$ ls -altriR|grep -i root
>134320262 drwxr-xr-x. 45 root root 4096 Apr 12 07:58 ..
>[oneadmin@mdskvm-p01 ~]$
>
>[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0
><domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
>    <name>one-38</name>
>    <vcpu>1</vcpu>
>    <cputune>
>        <shares>1024</shares>
>    </cputune>
>    <memory>524288</memory>
>    <os>
>        <type arch='x86_64'>hvm</type>
>        <boot dev='hd'/>
>    </os>
>    <devices>
>        <emulator>/usr/libexec/qemu-kvm</emulator>
>        <disk type='file' device='disk'>
>            <source file='/var/lib/one//datastores/0/38/disk.0'/>
>            <target dev='hda'/>
>            <driver name='qemu' type='qcow2' cache='none'/>
>        </disk>
>        <disk type='file' device='cdrom'>
>            <source file='/var/lib/one//datastores/0/38/disk.1'/>
>            <target dev='hdb'/>
>            <readonly/>
>            <driver name='qemu' type='raw'/>
>        </disk>
>        <interface type='bridge'>
>            <source bridge='br0'/>
>            <mac address='02:00:c0:a8:00:64'/>
>        </interface>
>        <graphics type='vnc' listen='0.0.0.0' port='5938'/>
>    </devices>
>    <features>
>        <acpi/>
>    </features>
></domain>
>
>[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0|grep -i nfs
>[oneadmin@mdskvm-p01 ~]$
>
>Cheers,
>Tom K.
>-------------------------------------------------------------------------------------
>
>Living on earth is expensive, but it includes a free trip around the sun.
>
>On 4/12/2016 11:45 AM, John Ferlan wrote:
>>
>> On 04/12/2016 10:58 AM, TomK wrote:
>>> Hey Martin,
>>>
>>> Thanks very much.  Appreciate you jumping in on this thread.
>> Can you provide some more details with respect to which libvirt version
>> you have installed.  I know I've made changes in this space in more
>> recent versions (not the most recent).  I'm no root_squash expert, but I
>> was the last to change things in this space, so that makes me partially
>> fluent ;-) in NFS/root_squash speak.
>>

I'm always lost in how we handle *all* the corner cases that are not
even used anywhere at all, but I care about the conditions we have in
the code, especially since it's constantly changing.  So thanks for
jumping in.  I only replied because nobody else did and I had only the
tiniest clue as to what could happen.

>> Using root_squash is very "finicky" (to say the least)...  It wasn't
>> really clear from what you posted how you are attempting to reference
>> things.  Does the "/var/lib/one//datastores/0/38/deployment.0" XML file
>> use a direct path to the NFS volume, or does it use a pool?  If a pool,
>> then what type of pool?  It is beneficial to provide as many details as
>> possible about the configuration because (so to speak) those who are
>> helping you won't know your environment (I've never used OpenNebula),
>> nor do I have a 'oneadmin' uid:gid.
>>
>> What got my attention was the error message "initializing FS storage
>> file" with the "file:" prefix to the name and 9869:9869 as the uid:gid
>> trying to access the file (I assume that's oneadmin:oneadmin on your
>> system).

I totally missed this.  So the only thing that popped into my mind now
was checking the whole path:

    ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}

You can also run it as root and as oneadmin; however, after reading
through all the info again, I don't think that'll help.
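Martin's brace-expansion one-liner can be generalized. A minimal POSIX-shell sketch (the `parents` helper name is mine, not from the thread) that prints every prefix of a path, shallowest first, so each component can be fed to `ls -ld`:

```shell
# Print every prefix of a path so the permissions of each component can
# be inspected -- equivalent to the brace expansion above, but works for
# any depth.  The helper name `parents` is illustrative only.
parents() {
    p=$1
    out=""
    while [ -n "$p" ] && [ "$p" != "/" ]; do
        out="$p
$out"
        p=${p%/*}           # strip the last path component
        [ -z "$p" ] && p=/  # ${p%/*} of "/var" is empty: stop at /
    done
    printf '%s' "$out"
}

# Same directories Martin's expansion produces, one per line:
parents /var/lib/one/datastores/0/38/disk.1
```

Piping the result through `xargs ls -ld` then reproduces the listing from the thread in one shot.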
TomK
2016-Apr-12 19:55 UTC
Re: [libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path
Martin Kletzander
2016-Apr-12 20:29 UTC
Re: [libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path
On Tue, Apr 12, 2016 at 03:55:45PM -0400, TomK wrote:
>On 4/12/2016 3:40 PM, Martin Kletzander wrote:
>> [ It would be way easier to reply if you didn't top-post ]
>>
>> On Tue, Apr 12, 2016 at 12:07:50PM -0400, TomK wrote:
>>> On 4/12/2016 11:45 AM, John Ferlan wrote:
>>>> What got my attention was the error message "initializing FS storage
>>>> file" with the "file:" prefix to the name and 9869:9869 as the uid:gid
>>>> trying to access the file (I assume that's oneadmin:oneadmin on your
>>>> system).
>>>>
>>
>> I totally missed this.  So the only thing that popped into my mind now
>> was checking the whole path:
>>
>>     ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}
>>
>> You can also run it as root and as oneadmin; however, after reading
>> through all the info again, I don't think that'll help.
>>
>I top-post by default in Thunderbird, and we have the same setup at work
>with M$ LookOut.  Old habits are to blame, I guess.  I'll try to reply
>like this instead.  But yeah, it's terrible to top-post on mailing
>lists.  Here's the output, and thanks again:
>
>[oneadmin@mdskvm-p01 ~]$ ls -ld
>/var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}
>drwxr-xr-x. 21 root     root     4096 Apr 11 07:10 /var
>drwxr-xr-x. 45 root     root     4096 Apr 12 07:58 /var/lib
>drwxr-x---  12 oneadmin oneadmin 4096 Apr 12 15:50 /var/lib/one

Look ^^, maybe for a quick workaround you could try doing:

    chmod o+rx /var/lib/one

Let me know if that does the trick (at least for now).

>drwxrwxr-x   6 oneadmin oneadmin     46 Mar 31 02:44 /var/lib/one/datastores
>drwxrwxr-x   6 oneadmin oneadmin     42 Apr  5 00:20 /var/lib/one/datastores/0
>drwxrwxr-x   2 oneadmin oneadmin     68 Apr  5 00:20 /var/lib/one/datastores/0/38
>-rw-r--r--   1 oneadmin oneadmin 372736 Apr  5 00:20 /var/lib/one/datastores/0/38/disk.1
>[oneadmin@mdskvm-p01 ~]$
>
>That's the default setting, but I think I see what you're getting at:
>permissions get inherited?
>

No, I just think you need eXecute on all parent directories.  That
shouldn't hinder your security and could help.

>Cheers,
>Tom K.
>-------------------------------------------------------------------------------------
>
>Living on earth is expensive, but it includes a free trip around the sun.
>
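A small sketch of why the `chmod o+rx /var/lib/one` workaround works: reaching a file requires the eXecute (search) bit on every parent directory, so a world-readable file below a `750` directory is still unreachable for users outside the owning group. The layout below is a throwaway copy of the one in the thread, built under `mktemp -d`:

```shell
# Reproduce the thread's layout in a scratch directory.
d=$(mktemp -d)
mkdir -p "$d/one/datastores/0/38"
echo data > "$d/one/datastores/0/38/disk.1"

chmod 750 "$d/one"                           # like drwxr-x--- /var/lib/one
chmod 644 "$d/one/datastores/0/38/disk.1"    # file itself is world-readable

# For any uid outside the owning group, the 750 directory blocks
# traversal, so the 644 file below it still returns EACCES.
# Martin's workaround restores the search bit for "other":
chmod o+rx "$d/one"                          # now drwxr-xr-x

content=$(cat "$d/one/datastores/0/38/disk.1")
echo "$content"
rm -rf "$d"
```

As a shortcut for auditing a real path, `namei -l /var/lib/one/datastores/0/38/disk.1` (from util-linux) prints the mode of every component in one listing.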
John Ferlan
2016-Apr-12 21:08 UTC
Re: [libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path
On 04/12/2016 03:55 PM, TomK wrote:
>
> On 4/12/2016 3:40 PM, Martin Kletzander wrote:
>> [ It would be way easier to reply if you didn't top-post ]
>>
>> On Tue, Apr 12, 2016 at 12:07:50PM -0400, TomK wrote:
>>> Hey John,
>>>
>>> [...]
>>>
>>> [oneadmin@mdskvm-p01 ~]$ ls -altriR|grep -i root
>>> 134320262 drwxr-xr-x. 45 root root 4096 Apr 12 07:58 ..
>>> [oneadmin@mdskvm-p01 ~]$
>>>

It'd take more time than I have at the present moment to root out what
changed when for NFS root-squash, but suffice to say there were some
corner cases, some involving how qemu-img files are generated.  I don't
have the details present in my short-term memory.

>>>
>>> [oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0
>>> <domain type='kvm'
>>> xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
>>> [...]
>>>     <devices>
>>>         <emulator>/usr/libexec/qemu-kvm</emulator>
>>>         <disk type='file' device='disk'>
>>>             <source file='/var/lib/one//datastores/0/38/disk.0'/>
>>>             <target dev='hda'/>
>>>             <driver name='qemu' type='qcow2' cache='none'/>
>>>         </disk>
>>>         <disk type='file' device='cdrom'>
>>>             <source file='/var/lib/one//datastores/0/38/disk.1'/>
>>>             <target dev='hdb'/>
>>>             <readonly/>
>>>             <driver name='qemu' type='raw'/>
>>>         </disk>
>>> [...]
>>> </domain>
>>>
>>> [oneadmin@mdskvm-p01 ~]$ cat
>>> /var/lib/one//datastores/0/38/deployment.0|grep -i nfs
>>> [oneadmin@mdskvm-p01 ~]$
>>>

Having/using root squash via an NFS pool is "easy" (famous last words).
Create some pool XML (taking the example I have):

% cat nfs.xml
<pool type='netfs'>
  <name>rootsquash</name>
  <source>
    <host name='localhost'/>
    <dir path='/home/bzs/rootsquash/nfs'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/tmp/netfs-rootsquash-pool</path>
    <permissions>
      <mode>0755</mode>
      <owner>107</owner>
      <group>107</group>
    </permissions>
  </target>
</pool>

In this case 107:107 is qemu:qemu, and I used 'localhost' as the
hostname, but that can be an FQDN or the IP address of the NFS server.
You've already seen my /etc/exports.

    virsh pool-define nfs.xml
    virsh pool-build rootsquash
    virsh pool-start rootsquash
    virsh vol-list rootsquash

Now, instead of:

    <disk type='file' device='disk'>
      <source file='/var/lib/one//datastores/0/38/disk.0'/>
      <target dev='hda'/>
      <driver name='qemu' type='qcow2' cache='none'/>
    </disk>

something like:

    <disk type='volume' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source pool='rootsquash' volume='disk.0'/>
      <target dev='hda'/>
    </disk>

The volume name may be off, but it's perhaps close.  I forget how to do
the readonly bit for a pool (again, my focus is elsewhere).  Of course
you'd have to adjust the nfs.xml above to suit your environment and see
what you get.

The privileges for the pool and the volumes in the pool become the key
to how libvirt decides to "request access" to a volume.  "disk.1" having
read access is probably not an issue since you seem to be using it as a
CDROM; however, "disk.0" is going to be used for read/write and thus
would have to be appropriately configured...

>>> [...]
>>>
>>> On 4/12/2016 11:45 AM, John Ferlan wrote:
>>>> [...]
>>
>> I'm always lost in how we handle *all* the corner cases that are not
>> even used anywhere at all, but care about the conditions we have in
>> the code.  Especially when it's constantly changing.  So thanks for
>> jumping in.  I only replied because nobody else did and I had only
>> the tiniest clue as to what could happen.
>>

I saw the post but was heads-down somewhere else.  Suffice to say,
trying to swap in root_squash is a painful exercise...

John

[...]
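Worth noting when debugging this class of problem: root_squash is the server-side default for NFS exports, so an export line that doesn't say `no_root_squash` explicitly squashes root. A tiny classifier for an `/etc/exports` line (the `squash_mode` helper name is mine, for illustration only):

```shell
# root_squash is the default for NFS exports: unless the option list
# names no_root_squash, root on the client is mapped to an anonymous
# uid.  Classify a single /etc/exports line accordingly.
squash_mode() {
    case $1 in
        *no_root_squash*) echo no_root_squash ;;
        *)                echo root_squash ;;  # exports(5) default
    esac
}

squash_mode "/var/lib/one *(rw,sync)"                 # prints root_squash
squash_mode "/var/lib/one *(rw,sync,no_root_squash)"  # prints no_root_squash
```

On a live server, `exportfs -v` shows the effective options, including the squash setting, without having to parse the file by hand.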