Hi,

xfs is supposed to detect the layout of an md-RAID device when creating the
file system, but it doesn't seem to do that:


# cat /proc/mdstat
Personalities : [raid1]
md10 : active raid1 sde[1] sdd[0]
      499976512 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk


# mkfs.xfs /dev/md10p2
meta-data=/dev/md10p2            isize=512    agcount=4, agsize=30199892 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=120799568, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=58984, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


# mkfs.xfs -f -d su=64m,sw=2 /dev/md10p2
meta-data=/dev/md10p2            isize=512    agcount=16, agsize=7553024 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=120799568, imaxpct=25
         =                       sunit=16384  swidth=32768 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=58984, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


The 64MB chunk size was picked by mdadm automatically.  The device is made
from two disks, and xfs either doesn't figure that out, or it decided to
ignore the layout of the underlying RAID.

Am I doing something wrong here, or is xfs in CentOS somehow different?
Should we (or must we) always specify the appropriate values for su and sw,
or did xfs ignore them because what it picked is better?
On 20 September 2017 at 10:47, hw <hw at gc-24.de> wrote:
>
> Hi,
>
> xfs is supposed to detect the layout of an md-RAID device when creating the
> file system, but it doesn't seem to do that:
>
>
> # cat /proc/mdstat
> Personalities : [raid1]
> md10 : active raid1 sde[1] sdd[0]
>       499976512 blocks super 1.2 [2/2] [UU]
>       bitmap: 0/4 pages [0KB], 65536KB chunk
>
>
> # mkfs.xfs /dev/md10p2
> meta-data=/dev/md10p2            isize=512    agcount=4, agsize=30199892 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=0, sparse=0
> data     =                       bsize=4096   blocks=120799568, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> log      =internal log           bsize=4096   blocks=58984, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
> # mkfs.xfs -f -d su=64m,sw=2 /dev/md10p2
> meta-data=/dev/md10p2            isize=512    agcount=16, agsize=7553024 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=1        finobt=0, sparse=0
> data     =                       bsize=4096   blocks=120799568, imaxpct=25
>          =                       sunit=16384  swidth=32768 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> log      =internal log           bsize=4096   blocks=58984, version=2
>          =                       sectsz=512   sunit=8 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
> The 64MB chunk size was picked by mdadm automatically.  The device is made
> from two disks, and xfs either doesn't figure that out, or it decided to
> ignore the layout of the underlying RAID.
>
> Am I doing something wrong here, or is xfs in CentOS somehow different?
> Should we (or must we) always specify the appropriate values for su and sw,
> or did xfs ignore them because what it picked is better?
>

I don't know enough to answer, but I do have a question: what were you
expecting xfs to do (and what filesystems do that)?  Thanks

-- 
Stephen J Smoogen.
Once upon a time, hw <hw at gc-24.de> said:
> xfs is supposed to detect the layout of an md-RAID device when creating the
> file system, but it doesn't seem to do that:
>
>
> # cat /proc/mdstat
> Personalities : [raid1]
> md10 : active raid1 sde[1] sdd[0]
>       499976512 blocks super 1.2 [2/2] [UU]
>       bitmap: 0/4 pages [0KB], 65536KB chunk

RAID 1 has no "layout" (for RAID, that usually refers to striping in RAID
levels 0/5/6), so there's nothing for a filesystem to detect or optimize for.

The chunk size above is for the md-RAID write-intent bitmap; that's not
exposed information (for any RAID system that I'm aware of, software or
hardware) or something that filesystems can optimize for.

-- 
Chris Adams <linux at cmadams.net>
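(A quick way to see the distinction being drawn here, using the /dev/md10
device from this thread; the exact output will vary with the array:

# mdadm --detail /dev/md10 | grep -i chunk

For striped personalities such as RAID 0/5/6/10, mdadm --detail includes a
"Chunk Size" line, which is the stripe unit a filesystem could align to.
For a RAID 1 mirror that line is simply absent, and the 65536KB figure in
/proc/mdstat refers only to the write-intent bitmap chunk.)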
Stephen John Smoogen wrote:
> On 20 September 2017 at 10:47, hw <hw at gc-24.de> wrote:
>>
>> Hi,
>>
>> xfs is supposed to detect the layout of an md-RAID device when creating the
>> file system, but it doesn't seem to do that:
>>
>>
>> # cat /proc/mdstat
>> Personalities : [raid1]
>> md10 : active raid1 sde[1] sdd[0]
>>       499976512 blocks super 1.2 [2/2] [UU]
>>       bitmap: 0/4 pages [0KB], 65536KB chunk
>>
>>
>> # mkfs.xfs /dev/md10p2
>> meta-data=/dev/md10p2            isize=512    agcount=4, agsize=30199892 blks
>>          =                       sectsz=512   attr=2, projid32bit=1
>>          =                       crc=1        finobt=0, sparse=0
>> data     =                       bsize=4096   blocks=120799568, imaxpct=25
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
>> log      =internal log           bsize=4096   blocks=58984, version=2
>>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>
>>
>> # mkfs.xfs -f -d su=64m,sw=2 /dev/md10p2
>> meta-data=/dev/md10p2            isize=512    agcount=16, agsize=7553024 blks
>>          =                       sectsz=512   attr=2, projid32bit=1
>>          =                       crc=1        finobt=0, sparse=0
>> data     =                       bsize=4096   blocks=120799568, imaxpct=25
>>          =                       sunit=16384  swidth=32768 blks
>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
>> log      =internal log           bsize=4096   blocks=58984, version=2
>>          =                       sectsz=512   sunit=8 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>
>>
>> The 64MB chunk size was picked by mdadm automatically.  The device is made
>> from two disks, and xfs either doesn't figure that out, or it decided to
>> ignore the layout of the underlying RAID.
>>
>> Am I doing something wrong here, or is xfs in CentOS somehow different?
>> Should we (or must we) always specify the appropriate values for su and sw,
>> or did xfs ignore them because what it picked is better?
>>
>
> I don't know enough to answer, but I do have a question: what were you
> expecting xfs to do (and what filesystems do that)?  Thanks

I was expecting that the correct stripe size and stripe width would be used.
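(An aside on the mechanism: mkfs.xfs does not query mdadm directly; it reads
the I/O topology the block layer exports (via libblkid) and derives
sunit/swidth from the reported minimum and optimal I/O sizes.  Assuming the
same /dev/md10 device from this thread, what the array advertises can be
inspected with:

# blockdev --getiomin --getioopt /dev/md10
# cat /sys/block/md10/queue/minimum_io_size /sys/block/md10/queue/optimal_io_size

On a striped md array the optimal I/O size is non-zero (chunk size times the
number of data disks) and mkfs.xfs turns it into sunit/swidth; a RAID 1
mirror normally advertises 0, which is why the first mkfs.xfs run above ended
up with sunit=0, swidth=0.)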
Chris Adams wrote:
> Once upon a time, hw <hw at gc-24.de> said:
>> xfs is supposed to detect the layout of an md-RAID device when creating the
>> file system, but it doesn't seem to do that:
>>
>>
>> # cat /proc/mdstat
>> Personalities : [raid1]
>> md10 : active raid1 sde[1] sdd[0]
>>       499976512 blocks super 1.2 [2/2] [UU]
>>       bitmap: 0/4 pages [0KB], 65536KB chunk
>
> RAID 1 has no "layout" (for RAID, that usually refers to striping in RAID
> levels 0/5/6), so there's nothing for a filesystem to detect or optimize
> for.

Are you saying there is no difference between a RAID 1 and a non-RAID device
as far as xfs is concerned?  What if you use hardware RAID?

When you look at [1], it tells you to specify su and sw with hardware RAID
and says everything is detected automatically with md-RAID.  It doesn't have
an example with RAID 1, only one with RAID 10 --- however, why would that
make a difference?  Aren't there stripes in a RAID 1?

If you read from both disks in a RAID 1 simultaneously, you have to wait out
the latency of both disks before you get the data at full speed, so it might
be better to use stripes with them as well and read multiple parts of the
data at the same time.

[1]: http://xfs.org/index.php/XFS_FAQ#Q:_How_to_calculate_the_correct_sunit.2Cswidth_values_for_optimal_performance

> The chunk size above is for the md-RAID write-intent bitmap; that's not
> exposed information (for any RAID system that I'm aware of, software or
> hardware) or something that filesystems can optimize for.

Oh, ok.  How do you know what stripe size was picked by mdadm?  It seemed a
good idea to go with defaults as far as possible.
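(For the hardware-RAID case the FAQ describes, where nothing is
auto-detected, the usual rule is su = the controller's stripe/chunk size and
sw = the number of data-bearing disks.  A purely hypothetical sketch, not the
RAID 1 from this thread: for a 4-disk RAID 5 with a 512 KiB chunk on an
imaginary /dev/sdx1, that would be

# mkfs.xfs -d su=512k,sw=3 /dev/sdx1

since 4 disks in RAID 5 leave 3 carrying data.  For a plain RAID 1 mirror
there is nothing to calculate, because there is no striping; the mdadm
default asked about above is the bitmap chunk, not a stripe size.)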