Jacob Yundt
2013-Apr-28 17:44 UTC
[Gluster-users] KVM guest I/O errors with xfs backed gluster volumes
Does anyone have experience using gluster storage pools (backed by xfs
filesystems) with KVM? I'm able to add a VirtIO disk without issue;
however, when I try to create (or mount) an ext4 filesystem from the KVM
guest, I get errors. If I use a gluster volume that is _not_ backed by xfs
(e.g. ext4), everything works as expected.

I'm able to reproduce this on CentOS 6.4 x86_64 (2.6.32-358.6.1.el6.x86_64)
with the following gluster versions:

*) gluster 3.3.1
*) gluster 3.3.1-13 (built from EPEL SRPMs)
*) gluster 3.4.0alpha3

I've poked around RH bugzilla, but haven't been able to find anything so far.

For reference: "cache mode" on the VirtIO disks is set to "none" (to allow
for safe live migration).

Any ideas/suggestions?

-Jacob
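P.S. For reference, the disk stanza in my guest XML looks roughly like the
following (file name and target device are illustrative, not my exact
config):

<!-- VirtIO disk on a file from the gluster storage pool -->
<!-- cache='none' makes QEMU open the backing file with O_DIRECT -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/gluster-pool/guest01.img'/>
  <target dev='vda' bus='virtio'/>
</disk>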
Vijay Bellur
2013-Apr-29 06:14 UTC
[Gluster-users] KVM guest I/O errors with xfs backed gluster volumes
On 04/28/2013 11:14 PM, Jacob Yundt wrote:
> Does anyone have experience using gluster storage pools (backed by xfs
> filesystems) with KVM? I'm able to add a VirtIO disk without issue;
> however, when I try to create (or mount) an ext4 filesystem from the
> KVM guest, I get errors. If I use a gluster volume that is _not_
> backed by xfs (e.g. ext4), everything works as expected. I'm able to
> reproduce on CentOS 6.4 x86_64 (2.6.32-358.6.1.el6.x86_64) with the
> following gluster versions:
>
> *) gluster 3.3.1
> *) gluster 3.3.1-13 (built from EPEL SRPMs)
> *) gluster 3.4.0alpha3

What kind of a volume are you using? Do you observe any failures in the
gluster client log files?

Regards,
Vijay
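P.S. For a FUSE mount, the client log normally lives under
/var/log/glusterfs/ with the mount path's slashes turned into dashes.
Something along these lines (volume name and mount point illustrative)
should surface any failures:

# volume type and layout
gluster volume info vmstore

# errors/warnings in the client log around the time of the guest I/O failure
grep -E ' [EW] ' /var/log/glusterfs/mnt-vmstore.log | tail -50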
Bharata B Rao
2013-Nov-06 03:56 UTC
[Gluster-users] KVM guest I/O errors with xfs backed gluster volumes
My mail below didn't make it to the list, hence resending...

On Tue, Nov 5, 2013 at 8:04 PM, Bharata B Rao <bharata at linux.vnet.ibm.com> wrote:
> On Wed, Oct 30, 2013 at 11:26:48PM +0530, Bharata B Rao wrote:
> > On Tue, Oct 29, 2013 at 1:21 PM, Anand Avati <avati at gluster.org> wrote:
> >
> > > Looks like what is happening is that qemu performs ioctls() on the
> > > backend to query logical_block_size (for direct IO alignment). That
> > > works on XFS, but fails on FUSE (hence qemu ends up performing IO with
> > > the default 512 alignment rather than 4k).
> > >
> > > Looks like this might be something we can address by enhancing the
> > > gluster driver in qemu. Note that glusterfs does not have an ioctl()
> > > FOP, but we could probably wire up a virtual xattr call for this
> > > purpose.
> > >
> > > Copying Bharata to check if he has other solutions in mind.
> >
> > I see alignment issues and a subsequent QEMU failure (pread() failing
> > with EINVAL) when I use a file from an XFS mount point (with sectsz=4k)
> > as a virtio disk with the cache=none QEMU option. However, this failure
> > isn't seen when I have sectsz=512. And all of this is w/o gluster, so
> > there seem to be alignment issues even w/o gluster. I will debug more
> > and get back.
>
> I gather that the QEMU block layer and SeaBIOS don't yet support 4k
> sectors, so this is not a QEMU-GlusterFS specific issue.
>
> You could either avoid the cache=none option (which results in O_DIRECT),
> or use something like the below, which explicitly sets the sector size
> and min io size for the guest:
>
> -drive file=/mnt/xfs.img,if=none,cache=none,format=raw,id=mydisk -device
> virtio-blk,drive=mydisk,logical_block_size=4096,physical_block_size=4096,min_io_size=4096
>
> Ref: https://bugzilla.redhat.com/show_bug.cgi?id=997839
>
> Regards,
> Bharata.

--
http://raobharata.wordpress.com/
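For anyone driving this through libvirt rather than a raw QEMU command
line: the sector size of the backing filesystem can be confirmed with
xfs_info, and the same block-size workaround can be expressed in the guest
XML via the <blockio> element (assuming a libvirt new enough to support
it; paths and device names below are illustrative):

# Check the sector size of the XFS filesystem backing the image;
# sectsz=4096 is what trips up QEMU's default 512-byte O_DIRECT alignment.
xfs_info /mnt | grep sectsz

<!-- libvirt equivalent of the logical/physical_block_size QEMU options -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/mnt/xfs.img'/>
  <target dev='vda' bus='virtio'/>
  <blockio logical_block_size='4096' physical_block_size='4096'/>
</disk>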