Hello,

I'm seeing a weird issue with OpenStack and Gluster.

I have /var/lib/nova/instances mounted as a glusterfs volume. The owner of
/var/lib/nova/instances is nova:nova.

When I launch a VM and watch the instance directory as it launches, I see
the following:

root@c01:/var/lib/nova/instances/instance-00000012# ls -l
total 8
-rw-rw---- 1 nova nova 0 Aug 24 14:22 console.log
-rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml

This is correct.

Then it changes ownership to libvirt-qemu:

root@c01:/var/lib/nova/instances/instance-00000012# ls -l
total 22556
-rw-rw---- 1 libvirt-qemu kvm 0 Aug 24 14:22 console.log
-rw-r--r-- 1 libvirt-qemu kvm 27262976 Aug 24 14:22 disk
-rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml

Again, this is correct.

But then it changes to root:

root@c01:/var/lib/nova/instances/instance-00000012# ls -l
total 22556
-rw-rw---- 1 root root 0 Aug 24 14:22 console.log
-rw-r--r-- 1 root root 27262976 Aug 24 14:22 disk
-rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml

OpenStack then errors out because it can no longer access the files
correctly.

If I remove the /var/lib/nova/instances mount and just use the local
filesystem, the change to root ownership does not happen.

I have successfully had Gluster working with OpenStack in this way on a
different installation, so I'm not sure why I'm seeing this issue now.

Any ideas?

Thanks,
Joe
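P.S. For reference, the volume is a plain glusterfs client mount on the
compute node. It looks roughly like the following -- the server and volume
names here are placeholders, not the exact ones from my setup:

    # /etc/fstab on the compute node (placeholder server/volume names)
    gluster1:/nova-instances  /var/lib/nova/instances  glusterfs  defaults,_netdev  0 0

    # equivalent manual mount, then the ownership nova expects
    mount -t glusterfs gluster1:/nova-instances /var/lib/nova/instances
    chown nova:nova /var/lib/nova/instances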
I figured out how to work around this, but I'm not sure of the exact reason
it happened.

The Gluster bricks I was using were LVM logical volumes sitting on top of a
software RAID1. I broke the software RAID and dedicated one hard drive to
LVM so OpenStack could use it for nova-volumes. I then used the other drive
strictly for the Gluster brick. This took mdadm and LVM out of the equation
and the problem went away.

I then tried with just LVM and still did not see the problem. Unfortunately
I don't have enough hardware at the moment to create another RAID1 mirror,
so I can't single that out. I will try when I get a chance -- unless anyone
else knows whether it would cause a problem? Or maybe it is the mdadm+LVM
combination? (I've put a rough sketch of the before and after brick layout
below the quoted message.)

Thanks,
Joe

On Fri, Aug 24, 2012 at 3:17 PM, Joe Topjian <joe at topjian.net> wrote:
> Hello,
>
> I'm seeing a weird issue with OpenStack and Gluster.
>
> I have /var/lib/nova/instances mounted as a glusterfs volume. The owner of
> /var/lib/nova/instances is nova:nova.
> [...]
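For reference, here is roughly what the two brick layouts looked like. The
device names, VG/LV names, sizes, and volume name below are placeholders
rather than the exact commands I ran:

    # Original layout: brick on an LVM LV sitting on an mdadm RAID1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    pvcreate /dev/md0
    vgcreate vg_bricks /dev/md0
    lvcreate -n brick1 -L 500G vg_bricks
    mkfs.xfs /dev/vg_bricks/brick1
    mount /dev/vg_bricks/brick1 /export/brick1

    # Workaround: one whole drive for the brick, the other drive handed
    # to LVM for nova-volumes
    mkfs.xfs /dev/sdb
    mount /dev/sdb /export/brick1
    pvcreate /dev/sdc
    vgcreate nova-volumes /dev/sdc

    # brick exported to the same volume in both cases
    gluster volume create nova-instances gluster1:/export/brick1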
Is it possible a gluster volume rebalance (or remove-brick) was in progress
in the background? If so, you might have hit http://review.gluster.org/3861.

If not, can you please file a bug with the client logs of all the machines
where the 'disk' file was possibly getting modified?

Thanks,
Avati

On Fri, Aug 24, 2012 at 2:17 PM, Joe Topjian <joe at topjian.net> wrote:
> Hello,
>
> I'm seeing a weird issue with OpenStack and Gluster.
>
> I have /var/lib/nova/instances mounted as a glusterfs volume. The owner of
> /var/lib/nova/instances is nova:nova.
> [...]
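For reference, the checks would look roughly like this. The volume name and
brick path are placeholders, and the exact client log file name depends on
the mount point (the fuse client typically names the log after the mount
path):

    # on any server in the trusted pool: is a rebalance or remove-brick
    # still running for the volume?
    gluster volume rebalance nova-instances status
    gluster volume remove-brick nova-instances gluster1:/export/brick1 status

    # client (fuse mount) log on each compute node, named after the
    # /var/lib/nova/instances mount point
    less /var/log/glusterfs/var-lib-nova-instances.log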