A little off topic, but I thought someone here would know off the top of their head...

I'm setting up a new glusterfs system. I have a hardware RAID6 configured as 10+2 with a 256K chunk size (so the partition seen by Red Hat is a single LUN). I get a warning when I initialize it with XFS:

[root@lab-ads1 ~]# mkfs.xfs -f -isize=512 -d su=256k,sw=10 /dev/mapper/c0
mkfs.xfs: Specified data stripe width 5120 is not the same as the volume stripe width 2048

The mkfs seems to work fine (which is why I characterize the message as a warning).

I've searched around and see this type of message all over the place, but there's never a hint whether it's an expected message for such a configuration that can safely be ignored, or whether I'm severely misinterpreting the man page and performance will suffer as these partitions fill up with data.

Thanks in advance,
Bob
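For reference, both numbers in the warning appear to be in 512-byte sectors (5120 x 512 = 2560 KiB, which matches su=256k x sw=10), so the mismatch works out like this (a quick sanity check only; the device name is the one from the command above):

    # specified geometry: su=256k, sw=10 data disks
    echo $(( 256 * 1024 * 10 / 512 ))   # -> 5120 sectors = 2560 KiB specified stripe width
    # geometry the volume itself reports
    echo $(( 2048 * 512 / 1024 ))       # -> 1024 KiB (1 MiB) reported stripe width

In other words, /dev/mapper/c0 appears to be advertising an optimal I/O size of 1 MiB rather than the 2.5 MiB that 256k x 10 would imply.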
Hello,

I'm getting a similar warning and I haven't found out whether it is a real problem. There are posts at https://wiki.xkyle.com/XFS_Block_Sizes and at http://blog.tsunanet.net/2011/08/mkfsxfs-raid10-optimal-performance.html that explain how to calculate the correct values. If I set everything so that XFS stops complaining, the result doesn't match the stripe size configured on the RAID array at all. So my best guess is that, in my case, XFS isn't getting correct information from the LSI RAID controller. I haven't done any extensive testing comparing different settings, so I may be completely wrong too :)

-samuli

"Ellison, Bob" <bob.ellison at ccur.com> wrote on 4.12.2013 at 18.34:

> [root@lab-ads1 ~]# mkfs.xfs -f -isize=512 -d su=256k,sw=10 /dev/mapper/c0
> mkfs.xfs: Specified data stripe width 5120 is not the same as the volume stripe width 2048
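For what it's worth, the calculation those posts describe comes out as follows for a 10+2 RAID6 with a 256 KiB chunk (a sketch only, assuming the controller really does stripe 256 KiB across 10 data disks; the sunit/swidth form expresses the same geometry in 512-byte sectors):

    # su/sw form: stripe unit = RAID chunk size, stripe width = number of data disks
    mkfs.xfs -f -i size=512 -d su=256k,sw=10 /dev/mapper/c0

    # equivalent sunit/swidth form, in 512-byte sectors:
    #   sunit  = 256 KiB / 512 B = 512
    #   swidth = sunit * 10      = 5120
    mkfs.xfs -f -i size=512 -d sunit=512,swidth=5120 /dev/mapper/c0

Either form gives the same on-disk geometry; the warning only means the result differs from what the block device itself reports.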
On 12/04/2013 11:34 AM, Ellison, Bob wrote:
> I'm setting up a new glusterfs system. I have a hardware RAID6 configured as 10+2 with a 256K chunk size (so the partition seen by Red Hat is a single LUN). I get a warning when I initialize it with XFS:
>
> [root@lab-ads1 ~]# mkfs.xfs -f -isize=512 -d su=256k,sw=10 /dev/mapper/c0
> mkfs.xfs: Specified data stripe width 5120 is not the same as the volume stripe width 2048

You could check out 'blkid -i /dev/mapper/c0' to see what characteristics the dm device reports (and/or the underlying block device, if accessible) and whether they correspond to how the volume has been configured. Assuming a recent enough version, mkfs.xfs correlates the minimum I/O size to the stripe unit and the optimal I/O size to the stripe width. I'm not sure what the device-mapper device represents here, but you'll probably want to make sure it is aligned as well.

Brian
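If it helps, here is a minimal way to see what the dm device is reporting (assuming the device node from the thread; the dm-N name is simply resolved from it):

    # I/O limits as blkid sees them
    blkid -i /dev/mapper/c0

    # the same values straight from sysfs
    DM=$(basename $(readlink -f /dev/mapper/c0))
    cat /sys/block/$DM/queue/minimum_io_size   # compare against the stripe unit  (262144 for su=256k)
    cat /sys/block/$DM/queue/optimal_io_size   # compare against the stripe width (2621440 for su=256k,sw=10)

If optimal_io_size comes back as 1048576 (1 MiB, i.e. 2048 sectors) rather than 2621440 (2.5 MiB, i.e. 5120 sectors), that would match the warning and suggest the mismatch originates below XFS, in the controller or the device-mapper layer, rather than in mkfs.xfs itself.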