Lin Shen (lshen)
2007-Feb-08 15:01 UTC
[Lustre-discuss] No space left while running createmany
I created a Lustre file system with the MDT on a 32MB partition and one OST on a 480MB partition, and mounted the file system on two nodes. While running the createmany test program on the client node, it always stops at 10000 files with a "No space left" error. But the strange thing is that df shows both partitions have plenty of free space.

Lin
Gary Every replied:

Sounds like you're running out of inodes. Run:

tune2fs -l <raw_device>

to see how many inodes the filesystem supports.

_______________________________________________
Lustre-discuss mailing list
Lustre-discuss@clusterfs.com
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
Lin Shen (lshen)
2007-Feb-08 18:14 UTC
[Lustre-discuss] No space left while running createmany
tune2fs on the MDT partition says that there are still free inodes. In general, how is the default number of inodes calculated for a Lustre file system? I guess it can be set by "mkfsoptions", but not through tunefs.lustre.

[root@cfs4 ~]# tune2fs -l /dev/hda10 | more
tune2fs 1.35 (28-Feb-2004)
Filesystem volume name:   lustrefs-MDT0000
Last mounted on:          <not available>
Filesystem UUID:          77726e31-c4ac-4244-b71d-396a98e1c2ed
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              10032
Block count:              10032
Reserved block count:     501
Free blocks:              7736
Free inodes:              10019
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      2
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         10032
Inode blocks per group:   1254
Filesystem created:       Wed Feb  7 15:04:21 2007
Last mount time:          Wed Feb  7 15:05:54 2007
Last write time:          Wed Feb  7 15:05:54 2007
Mount count:              3
Maximum mount count:      37
Last checked:             Wed Feb  7 15:04:21 2007
Check interval:           15552000 (6 months)
Next check after:         Mon Aug  6 16:04:21 2007
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               512
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      9b6b9ef5-7a3e-48e3-9871-63b91a60cbdf
Journal backup:           inode blocks
Hi,

I had a look at the mke2fs code in e2fsprogs-1.39 (since Lustre eventually uses ext3 to create the filesystem), and this is how Lustre arrives at the default number of inodes.

For small filesystems (as in your case), it creates an inode for every 4096 bytes of space on the filesystem. This can also be specified with the -i option to mke2fs. So in your case, with a 32MB partition, you would have 32MB/4096 = 8192 inodes by default. Using --mkfsoptions="-i 2048" with mkfs.lustre would give you 16384 inodes, enough to create more than 10000 files.

For large filesystems, an inode is created for every 1MB of filesystem space, and for even larger filesystems an inode is created for every 4MB of filesystem space.

Yes, tune2fs cannot change the number of inodes in your filesystem. This option can only be set while formatting the filesystem.

Regards,
Kalpak.

On Thu, 2007-02-08 at 17:14 -0800, Lin Shen (lshen) wrote:
> tune2fs on the MDT partition says that there are still free inodes. [...]
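Kalpak's arithmetic can be checked directly. A small sketch in plain shell, using the sizes from this thread (a 32MB MDT partition and the mke2fs -i ratios discussed above):

```shell
# Expected inode count = partition size in bytes / bytes-per-inode (mke2fs -i).
# 32MB MDT partition, as in the thread.
PART_BYTES=$((32 * 1024 * 1024))
echo "default (-i 4096): $((PART_BYTES / 4096)) inodes"   # 8192
echo "with -i 2048:      $((PART_BYTES / 2048)) inodes"   # 16384
```

The usable count will come out somewhat lower once the journal and reserved inodes are subtracted, but the ratio is what determines the ceiling.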
Andreas Dilger
2007-Feb-09 00:14 UTC
[Lustre-discuss] No space left while running createmany
On Feb 08, 2007 14:00 -0800, Lin Shen (lshen) wrote:
> While running the createmany test program on the client node, it always
> stops at 10000 files with a "No space left" error. But the strange thing
> is that df shows both partitions have plenty of free space.

Use "lfs df" and "lfs df -i" to get per-OST and per-MDS usage stats.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
Lin Shen (lshen)
2007-Feb-09 17:49 UTC
[Lustre-discuss] No space left while running createmany
Hi Kalpak,

What you described makes sense, but when I tried it, it didn't work as expected. With --mkfsoptions="-i 2048", I ended up with even fewer inodes and less free space (at least according to "lfs df -i"). So I tried 1024 as the bytes-per-inode ratio, and the inode count decreased even more. And making the ratio bigger (8192) doesn't generate more inodes than the default (4096) either.

I also tried "--mkfsoptions -N number-of-inodes", but "lfs df -i" doesn't report that number. When I ran bonnie++ on it, Lustre (1.6 beta) crashed with a kernel error.

Lin
Lin Shen (lshen)
2007-Feb-12 15:46 UTC
[Lustre-discuss] No space left while running createmany
Just to show that --mkfsoptions="-i 2048" is not working as expected, or maybe I'm not doing it right.

First, I ran mkfs on the MDT partition with the defaults. The command output shows it's using 4096 as you described, and "lfs df -i" says 7743 inodes were created. So far so good.

Then I ran mkfs again on the same partition, this time setting bytes-per-inode to 2048. Supposedly the number of inodes should double, but "lfs df -i" says only 6489 inodes were created. It actually created fewer inodes!

[root@cfs6 ~]# mkfs.lustre --fsname=lustrefs --mdt --mgs --reformat /dev/hda9

   Permanent disk data:
Target:     lustrefs-MDTffff
Index:      unassigned
Lustre FS:  lustrefs
Mount type: ldiskfs
Flags:      0x75 (MDT MGS needs_index first_time update)
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:

device size = 39MB
formatting backing filesystem ldiskfs on /dev/hda9
        target name   lustrefs-MDTffff
        4k blocks     0
        options       -i 4096 -I 512 -q -O dir_index -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustrefs-MDTffff -i 4096 -I 512 -q -O dir_index -F /dev/hda9
Writing CONFIGS/mountdata

[root@cfs6 ~]# lfs df -i
UUID                    Inodes   IUsed    IFree  IUse%  Mounted on
lustrefs-MDT0000_UUID     7743      25     7718     0   /mnt/lustre/bonnie[MDT:0]
lustrefs-OST0000_UUID   106864      57   106807     0   /mnt/lustre/bonnie[OST:0]
filesystem summary:       7743      25     7718     0   /mnt/lustre/bonnie

[root@cfs6 ~]# mkfs.lustre --fsname=lustrefs --mkfsoptions="-i 2048" --mdt --mgs --reformat /dev/hda9

   Permanent disk data:
Target:     lustrefs-MDTffff
Index:      unassigned
Lustre FS:  lustrefs
Mount type: ldiskfs
Flags:      0x75 (MDT MGS needs_index first_time update)
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:

device size = 39MB
formatting backing filesystem ldiskfs on /dev/hda9
        target name   lustrefs-MDTffff
        4k blocks     0
        options       -i 2048 -I 512 -q -O dir_index -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustrefs-MDTffff -i 2048 -I 512 -q -O dir_index -F /dev/hda9
Writing CONFIGS/mountdata

[root@cfs6 ~]# lfs df -i
UUID                    Inodes   IUsed    IFree  IUse%  Mounted on
lustrefs-MDT0000_UUID     6489      25     6464     0   /mnt/lustre/bonnie[MDT:0]
lustrefs-OST0000_UUID   106864      57   106807     0   /mnt/lustre/bonnie[OST:0]
filesystem summary:       6489      25     6464     0   /mnt/lustre/bonnie
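One detail visible in both mkfs transcripts: they use -I 512 (512-byte inodes), so lowering the bytes-per-inode ratio also spends a much larger fraction of the small device on inode tables. A quick sketch of that overhead in plain shell (this is my own arithmetic, not an explanation confirmed anywhere in the thread):

```shell
# Inode-table overhead as a percentage of the device:
# inode_size (-I, 512 in both mkfs runs) / bytes_per_inode (-i).
INODE_SIZE=512
for RATIO in 4096 2048; do
  echo "-i $RATIO: $((100 * INODE_SIZE / RATIO))% of the device in inode tables"
done
```

At -i 2048 a quarter of the 39MB device goes to inode tables before any journal or reserved blocks, which at least accounts for the lost free space; whether it also explains the lower figure that "lfs df -i" reports is still an open question at this point in the thread.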
Daniel Leaberry
2007-Feb-12 15:59 UTC
[Lustre-discuss] No space left while running createmany
Lin Shen (lshen) wrote:
> Just to show that --mkfsoptions="-i 2048" is not working as expected,
> or maybe I'm not doing it right. [...] Supposedly the number of inodes
> should double, but "lfs df -i" says only 6489 inodes were created. It
> actually created fewer inodes!

I have had no issues with mkfsoptions, but I single-quote it, like this:

mkfs.lustre --fsname=lustre01 --mdt --mgs --mkfsoptions='-i 1024' /dev/sdb
Lin Shen (lshen)
2007-Feb-12 16:08 UTC
[Lustre-discuss] No space left while running createmany
I believe both single quote and double quote work the same. I can see the right parameter is being passed to the underline mkfs.ext2 with double quote. Anyways, to be careful, I just tried with single quote and still got the same result. What''s the version you are using? I''m using 1.5.97. Lin> -----Original Message----- > From: Daniel Leaberry [mailto:dleaberry@iarchives.com] > Sent: Monday, February 12, 2007 2:59 PM > To: Lin Shen (lshen) > Cc: lustre-discuss@clusterfs.com > Subject: Re: [Lustre-discuss] No space left while running createmany > > > > > Lin Shen (lshen) wrote: > > > > Just to show that the --mkfsoptions="-i 2048" is not working as > > expected or maybe I''m not doing it right. > > > > First, I did a mkfs on the mdt partition with the default. From the > > command outputs can tell that it''s using 4096 as you described. And > > "lfs df -i" says that there are 7743 inodes created. So far so good. > > > > Then, I did another mkfs on the same partition, and this time I set > > the bytes-per-node to 2048. Supposely, the number of inodes > should double. > > But "lfs df -i" says only 6489 inode are created. It > actually created > > fewer inodes! 
> > > > > > [root@cfs6 ~]# mkfs.lustre --fsname=lustrefs --mdt --mgs --reformat > > /dev/hda9 > > > > Permanent disk data: > > Target: lustrefs-MDTffff > > Index: unassigned > > Lustre FS: lustrefs > > Mount type: ldiskfs > > Flags: 0x75 > > (MDT MGS needs_index first_time update ) Persistent > > mount > > opts: errors=remount-ro,iopen_nopriv,user_xattr > > Parameters: > > > > device size = 39MB > > formatting backing filesystem ldiskfs on /dev/hda9 > > target name lustrefs-MDTffff > > 4k blocks 0 > > options -i 4096 -I 512 -q -O dir_index -F > > mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustrefs-MDTffff -i > 4096 -I 512 -q > > -O dir_in dex -F /dev/hda9 Writing CONFIGS/mountdata > > > > > > [root@cfs6 ~]# lfs df -i > > UUID Inodes IUsed IFree IUse% Mounted on > > lustrefs-MDT0000_UUID 7743 25 7718 0 > > /mnt/lustre/bonnie[MDT > > :0] > > lustrefs-OST0000_UUID 106864 57 106807 0 > > /mnt/lustre/bonnie[OST > > :0] > > > > filesystem summary: 7743 25 7718 0 > > /mnt/lustre/bonnie > > > > > > I have had no issues with mkfsoptions but I single quote it > like this mkfs.lustre --fsname=lustre01 --mdt --mgs > --mkfsoptions=''-i 1024'' /dev/sdb > > > > [root@cfs6 ~]# mkfs.lustre --fsname=lustrefs --mkfsoptions="-i 2048" > > --mdt --mgs --reformat /dev/hda9 > > > > Permanent disk data: > > Target: lustrefs-MDTffff > > Index: unassigned > > Lustre FS: lustrefs > > Mount type: ldiskfs > > Flags: 0x75 > > (MDT MGS needs_index first_time update ) Persistent > > mount > > opts: errors=remount-ro,iopen_nopriv,user_xattr > > Parameters: > > > > device size = 39MB > > formatting backing filesystem ldiskfs on /dev/hda9 > > target name lustrefs-MDTffff > > 4k blocks 0 > > options -i 2048 -I 512 -q -O dir_index -F > > mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustrefs-MDTffff -i 2048 > -I 512 -q > > -O dir_index -F /dev/hda9 Writing CONFIGS/mountdata > > > > > > [root@cfs6 ~]# lfs df -i > > UUID Inodes IUsed IFree IUse% Mounted on > > lustrefs-MDT0000_UUID 6489 25 6464 0 > > 
/mnt/lustre/bonnie[MDT:0] > > lustrefs-OST0000_UUID 106864 57 106807 0 > > /mnt/lustre/bonnie[OST:0] > > > > filesystem summary: 6489 25 6464 0 > > /mnt/lustre/bonnie > > > > > > > > > >> -----Original Message----- > >> From: Kalpak Shah [mailto:kalpak@clusterfs.com] > >> Sent: Thursday, February 08, 2007 11:10 PM > >> To: Lin Shen (lshen) > >> Cc: Gary Every; lustre-discuss@clusterfs.com > >> Subject: RE: [Lustre-discuss] No space left while running > createmany > >> > >> Hi, > >> > >> I had a look at mke2fs code in e2fsprogs-1.39(since lustre > eventually > >> uses ext3 to create the filesystem) and this is how lustre would > >> create the default number of inodes. > >> > >> For small filesystems(as is your case), it creates a inode > for every > >> 4096 bytes of space on the file system. This can also be > specified by > >> the -i option to mke2fs. So in your case, with a > >> 32 MB partition you would have 32MB/4096 = 8192 inodes by > default. So > >> using a "--mkfsoptions -i 2048" option to mkfs.lustre > would give you > >> 16384 inodes enough to create more than 10000 files. > >> > >> For large filesytems, an inode is created for every 1Mb of > filesystem > >> space and for even for larger filesystems an inode is created for > >> every 4MB of filesystem space. > >> > >> Yes, tune2fs cannot change the number of inodes in your > filesystem. > >> This option can only be set while formatting the filesystem. > >> > >> Regards, > >> Kalpak. > >> > >> > >> On Thu, 2007-02-08 at 17:14 -0800, Lin Shen (lshen) wrote: > >> > >>> tune2fs on the MDT partition says that there are still free > >>> > >> inodes. In > >> > >>> general, how the default number of inodes is calculated for > >>> > >> a lustre > >> > >>> file system? I guess it can be set by "mkfsoptions", but > >>> > >> not through > >> > >>> tunefs.lustre though. 
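Kalpak's bytes-per-inode rule quoted above can be sanity-checked with simple arithmetic. The sketch below gives only the naive upper bound for the 39MB MDT device from the mkfs output in this thread; the journal, inode tables, and Lustre's own metadata consume space first, which is why the `lfs df -i` figures (7743 and 6489) come in lower.

```shell
# Naive inode estimate: device bytes / bytes-per-inode.
# Real counts are lower because the journal, inode tables, and
# Lustre's own metadata consume part of the 39MB device first.
dev_mb=39
for bpi in 4096 2048; do
  echo "-i $bpi => at most $(( dev_mb * 1024 * 1024 / bpi )) inodes"
done
```

This prints upper bounds of 9984 inodes for `-i 4096` and 19968 for `-i 2048`.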
Nathaniel Rutman
2007-Feb-12 16:29 UTC
[Lustre-discuss] No space left while running createmany
Your MDS device size is 39MB? You're not going to have a very big
filesystem. It wouldn't surprise me if the inode calcs look weird at the
low end due to fs overhead. In any case, you can just try the "mkfs.ext2"
command that mkfs.lustre prints out directly yourself, and play with the
values that way.

BTW, for 1.5.97 you should make your fsname < 8 chars (see bz11564)

Lin Shen (lshen) wrote:
>
> Just to show that the --mkfsoptions="-i 2048" is not working as
> expected, or maybe I'm not doing it right.
>
> First, I did a mkfs on the MDT partition with the defaults. From the
> command output you can tell that it's using 4096 as you described, and
> "lfs df -i" says that there are 7743 inodes. So far so good.
>
> Then, I did another mkfs on the same partition, and this time I set the
> bytes-per-inode to 2048. Supposedly, the number of inodes should
> double. But "lfs df -i" says only 6489 inodes were created. It actually
> created fewer inodes!
>
> [root@cfs6 ~]# mkfs.lustre --fsname=lustrefs --mdt --mgs --reformat /dev/hda9
> [...]
> [root@cfs6 ~]# lfs df -i
> UUID                   Inodes  IUsed   IFree IUse% Mounted on
> lustrefs-MDT0000_UUID    7743     25    7718    0  /mnt/lustre/bonnie[MDT:0]
> lustrefs-OST0000_UUID  106864     57  106807    0  /mnt/lustre/bonnie[OST:0]
>
> filesystem summary:      7743     25    7718    0  /mnt/lustre/bonnie
>
> [root@cfs6 ~]# mkfs.lustre --fsname=lustrefs --mkfsoptions="-i 2048" --mdt --mgs --reformat /dev/hda9
> [...]
> [root@cfs6 ~]# lfs df -i
> UUID                   Inodes  IUsed   IFree IUse% Mounted on
> lustrefs-MDT0000_UUID    6489     25    6464    0  /mnt/lustre/bonnie[MDT:0]
> lustrefs-OST0000_UUID  106864     57  106807    0  /mnt/lustre/bonnie[OST:0]
>
> filesystem summary:      6489     25    6464    0  /mnt/lustre/bonnie
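Nathan's suggestion (run the mkfs.ext2 line that mkfs.lustre prints, and play with the values) can be tried without touching a real partition by pointing mke2fs at a plain file image. A sketch, assuming e2fsprogs is installed; the /tmp path is illustrative, and the options are the ones mkfs.lustre printed in this thread:

```shell
# Replay the mkfs.ext2 command from the thread against a 39MB file
# image instead of /dev/hda9, then inspect what mke2fs really allocated.
img=/tmp/mdt-test.img                       # illustrative path
dd if=/dev/zero of="$img" bs=1M count=39 2>/dev/null
mkfs.ext2 -j -b 4096 -i 2048 -I 512 -q -O dir_index -F "$img"
tune2fs -l "$img" | grep -E 'Inode count|Free inodes|Free blocks'
rm -f "$img"
```

Varying `-i` here and re-reading the counts with tune2fs shows how much of the small device the inode tables and journal eat, without reformatting the MDT each time.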
Daniel Leaberry
2007-Feb-12 16:42 UTC
[Lustre-discuss] No space left while running createmany
Lin Shen (lshen) wrote:
> I believe both single quotes and double quotes work the same. I can see
> the right parameter being passed to the underlying mkfs.ext2 with
> double quotes.
>
> Anyway, to be careful, I just tried with single quotes and still got
> the same result.
>
> What's the version you are using? I'm using 1.5.97.
>
> Lin

I'm using 1.5.97 as well. Sorry, I don't know what's wrong with your
setup. All I can say is the mkfsoptions seem to work for me.

Daniel
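Lin's point about quoting holds for any POSIX shell: single and double quotes only differ when the string contains characters the shell expands ($, backquote, backslash), which "-i 2048" does not. A quick demonstration (`show_arg` is just a throwaway helper for illustration, not a Lustre tool):

```shell
# Either quoting style delivers the identical argument to mkfs.lustre.
show_arg() { printf '[%s]\n' "$1"; }   # throwaway helper
show_arg '-i 1024'
show_arg "-i 1024"
```

Both calls print `[-i 1024]`, so the quoting style cannot explain the inode-count discrepancy.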
Lin Shen (lshen)
2007-Feb-12 17:00 UTC
[Lustre-discuss] No space left while running createmany
Yes, it's 39M. I'm trying to simulate running Lustre on a 512M Compact
Flash. We'll use the CF to host a Linux root file system, so the number
of files could still be big.

BTW, there is another inode number associated with the OST. What's that
one for?

Lin

> -----Original Message-----
> From: Nathaniel Rutman [mailto:nathan@clusterfs.com]
> Sent: Monday, February 12, 2007 3:30 PM
> To: Lin Shen (lshen)
> Cc: lustre-discuss@clusterfs.com
> Subject: Re: [Lustre-discuss] No space left while running createmany
>
> Your MDS device size is 39MB? You're not going to have a very big
> filesystem. It wouldn't surprise me if the inode calcs look weird at
> the low end due to fs overhead. In any case, you can just try the
> "mkfs.ext2" command that mkfs.lustre prints out directly yourself, and
> play with the values that way.
>
> BTW, for 1.5.97 you should make your fsname < 8 chars (see bz11564)
Daniel Leaberry
2007-Feb-13 10:37 UTC
[Lustre-discuss] No space left while running createmany
Daniel Leaberry wrote:> > > Lin Shen (lshen) wrote: >> I believe both single quote and double quote work the same. I can see >> the right parameter is being passed to the underline mkfs.ext2 with >> double quote. >> Anyways, to be careful, I just tried with single quote and still got the >> same result. >> What''s the version you are using? I''m using 1.5.97. >> >> Lin > > I''m using 1.5.97 as well. Sorry, I don''t know what''s wrong with your > setup. All I can say is the mkfsoptions seem to work for me. > DanielActually, I take that back. Since I''d been having issues actually mounting a lustre filesystem (8 character lustrefs bug) I''d just been mounting the mds and then running df -i NOT lfs df -i. Now that I can finally mount up a whole filesystem I run lfs df -i and I do see that the inode count is off. Normal df -i gives me ~940 million inodes (I used --mkfsoptions=''-i 1024'' /dev/sdb) [root@lu-mds01 ~]# df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sdb 943652864 35 943652829 1% /var/mnt/lustre1-mds lfs df -i gives me ~117 million inodes (which corresponds to the inode count with the normal mkfsoptions of -i 4096. [root@lu-fe01 lustre01.iarchives.com]# lfs df -i UUID Inodes IUsed IFree IUse% Mounted on lustre1-MDT0000_UUID 117768997 36 117768961 0% /mnt/lustre01.iarchives.com[MDT:0] lustre1-OST0000_UUID 244195328 121 244195207 0% /mnt/lustre01.iarchives.com[OST:0] lustre1-OST0001_UUID 244195328 57 244195271 0% /mnt/lustre01.iarchives.com[OST:1] lustre1-OST0002_UUID 244195328 57 244195271 0% /mnt/lustre01.iarchives.com[OST:2] lustre1-OST0003_UUID 244195328 57 244195271 0% /mnt/lustre01.iarchives.com[OST:3] lustre1-OST0004_UUID 244195328 57 244195271 0% /mnt/lustre01.iarchives.com[OST:4] lustre1-OST0005_UUID 244195328 57 244195271 0% /mnt/lustre01.iarchives.com[OST:5] filesystem summary: 117768997 36 117768961 0% /mnt/lustre01.iarchives.com So it looks like the mkfs options are being passed correctly but lustre doesn''t pick it up on mount. 
Here''s the output of my mds format if it helps [root@lu-mds01 ~]# mkfs.lustre --reformat --fsname=lustre1 --mdt --mgs --mkfsoptions=''-i 1024'' /dev/sdb Permanent disk data: Target: lustre1-MDTffff Index: unassigned Lustre FS: lustre1 Mount type: ldiskfs Flags: 0x75 (MDT MGS needs_index first_time update ) Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr Parameters: device size = 921536MB formatting backing filesystem ldiskfs on /dev/sdb target name lustre1-MDTffff 4k blocks 0 options -i 1024 -J size=400 -I 512 -q -O dir_index -F mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustre1-MDTffff -i 1024 -J size=400 -I 512 -q -O dir_index -F /dev/sdb Daniel> >> >>> -----Original Message----- >>> From: Daniel Leaberry [mailto:dleaberry@iarchives.com] Sent: Monday, >>> February 12, 2007 2:59 PM >>> To: Lin Shen (lshen) >>> Cc: lustre-discuss@clusterfs.com >>> Subject: Re: [Lustre-discuss] No space left while running createmany >>> >>> >>> >>> >>> Lin Shen (lshen) wrote: >>> >>>> >>>> Just to show that the --mkfsoptions="-i 2048" is not working as >>>> expected or maybe I''m not doing it right. >>>> >>>> First, I did a mkfs on the mdt partition with the default. From the >>>> command outputs can tell that it''s using 4096 as you described. And >>>> "lfs df -i" says that there are 7743 inodes created. So far so good. >>>> >>>> Then, I did another mkfs on the same partition, and this time I set >>>> the bytes-per-node to 2048. Supposely, the number of inodes >>> should double. >>> >>>> But "lfs df -i" says only 6489 inode are created. It >>> actually created >>>> fewer inodes! 
>>>> [root@cfs6 ~]# mkfs.lustre --fsname=lustrefs --mdt --mgs --reformat /dev/hda9
>>>>
>>>>    Permanent disk data:
>>>> Target:     lustrefs-MDTffff
>>>> Index:      unassigned
>>>> Lustre FS:  lustrefs
>>>> Mount type: ldiskfs
>>>> Flags:      0x75
>>>>             (MDT MGS needs_index first_time update )
>>>> Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
>>>> Parameters:
>>>>
>>>> device size = 39MB
>>>> formatting backing filesystem ldiskfs on /dev/hda9
>>>>         target name  lustrefs-MDTffff
>>>>         4k blocks    0
>>>>         options      -i 4096 -I 512 -q -O dir_index -F
>>>> mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustrefs-MDTffff -i 4096 -I 512 -q -O dir_index -F /dev/hda9
>>>> Writing CONFIGS/mountdata
>>>>
>>>> [root@cfs6 ~]# lfs df -i
>>>> UUID                  Inodes  IUsed   IFree IUse% Mounted on
>>>> lustrefs-MDT0000_UUID   7743     25    7718    0% /mnt/lustre/bonnie[MDT:0]
>>>> lustrefs-OST0000_UUID 106864     57  106807    0% /mnt/lustre/bonnie[OST:0]
>>>>
>>>> filesystem summary:     7743     25    7718    0% /mnt/lustre/bonnie
>>>
>>> I have had no issues with mkfsoptions, but I single quote it, like
>>> this: mkfs.lustre --fsname=lustre01 --mdt --mgs --mkfsoptions='-i
>>> 1024' /dev/sdb
>>>
>>>> [root@cfs6 ~]# mkfs.lustre --fsname=lustrefs --mkfsoptions="-i 2048" --mdt --mgs --reformat /dev/hda9
>>>>
>>>>    Permanent disk data:
>>>> Target:     lustrefs-MDTffff
>>>> Index:      unassigned
>>>> Lustre FS:  lustrefs
>>>> Mount type: ldiskfs
>>>> Flags:      0x75
>>>>             (MDT MGS needs_index first_time update )
>>>> Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
>>>> Parameters:
>>>>
>>>> device size = 39MB
>>>> formatting backing filesystem ldiskfs on /dev/hda9
>>>>         target name  lustrefs-MDTffff
>>>>         4k blocks    0
>>>>         options      -i 2048 -I 512 -q -O dir_index -F
>>>> mkfs_cmd = mkfs.ext2 -j -b 4096 -L lustrefs-MDTffff -i 2048 -I 512 -q -O dir_index -F /dev/hda9
>>>> Writing CONFIGS/mountdata
>>>>
>>>> [root@cfs6 ~]# lfs df -i
>>>> UUID                  Inodes  IUsed   IFree IUse% Mounted on
>>>> lustrefs-MDT0000_UUID   6489     25    6464    0% /mnt/lustre/bonnie[MDT:0]
>>>> lustrefs-OST0000_UUID 106864     57  106807    0% /mnt/lustre/bonnie[OST:0]
>>>>
>>>> filesystem summary:     6489     25    6464    0% /mnt/lustre/bonnie
>>>>
>>>>> Kalpak Shah wrote:
>>>>> Hi,
>>>>>
>>>>> I had a look at the mke2fs code in e2fsprogs-1.39 (since Lustre
>>>>> eventually uses ext3 to create the filesystem), and this is how
>>>>> Lustre would create the default number of inodes.
>>>>>
>>>>> For small filesystems (as in your case), it creates an inode for
>>>>> every 4096 bytes of space on the filesystem. This can also be
>>>>> specified by the -i option to mke2fs. So in your case, with a
>>>>> 32MB partition, you would have 32MB/4096 = 8192 inodes by default,
>>>>> so using a --mkfsoptions "-i 2048" option to mkfs.lustre would
>>>>> give you 16384 inodes, enough to create more than 10000 files.
>>>>>
>>>>> For large filesystems, an inode is created for every 1MB of
>>>>> filesystem space, and for even larger filesystems an inode is
>>>>> created for every 4MB of filesystem space.
>>>>>
>>>>> Yes, tune2fs cannot change the number of inodes in your
>>>>> filesystem. This option can only be set while formatting the
>>>>> filesystem.
>>>>>
>>>>> Regards,
>>>>> Kalpak.
>>>>>
>>>>>> tune2fs on the MDT partition says that there are still free
>>>>>> inodes. In general, how is the default number of inodes
>>>>>> calculated for a Lustre filesystem? I guess it can be set by
>>>>>> "mkfsoptions", but not through tunefs.lustre.
Andreas Dilger
2007-Feb-13 22:03 UTC
[Lustre-discuss] No space left while running createmany
On Feb 13, 2007  10:36 -0700, Daniel Leaberry wrote:
> Normal df -i gives me ~940 million inodes (I used --mkfsoptions='-i
> 1024' /dev/sdb):
>
> [root@lu-mds01 ~]# df -i
> Filesystem            Inodes  IUsed      IFree IUse% Mounted on
> /dev/sdb           943652864     35  943652829    1% /var/mnt/lustre1-mds
>
> lfs df -i gives me ~117 million inodes (which corresponds to the inode
> count with the normal mkfsoptions of -i 4096).

That is because "df" and "df -i" are necessarily amalgamations of the
MDS and OST block and inode counts. The actual number of files you can
create is at least 90M, but could be as high as 117M depending on a
number of factors. The same is true of free blocks. We thought it best
to report the minimum number of free inodes instead of the maximum.

> [root@lu-fe01 lustre01.iarchives.com]# lfs df -i
> UUID                    Inodes  IUsed      IFree IUse% Mounted on
> lustre1-MDT0000_UUID 117768997     36  117768961    0% /mnt/lustre01.iarchives.com[MDT:0]
> lustre1-OST0000_UUID 244195328    121  244195207    0% /mnt/lustre01.iarchives.com[OST:0]
> lustre1-OST0001_UUID 244195328     57  244195271    0% /mnt/lustre01.iarchives.com[OST:1]
> lustre1-OST0002_UUID 244195328     57  244195271    0% /mnt/lustre01.iarchives.com[OST:2]
> lustre1-OST0003_UUID 244195328     57  244195271    0% /mnt/lustre01.iarchives.com[OST:3]
> lustre1-OST0004_UUID 244195328     57  244195271    0% /mnt/lustre01.iarchives.com[OST:4]
> lustre1-OST0005_UUID 244195328     57  244195271    0% /mnt/lustre01.iarchives.com[OST:5]
>
> filesystem summary:  117768997     36  117768961    0% /mnt/lustre01.iarchives.com

That's why we have "lfs df", so that you can see the per-device info
instead of the munged "df" output.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
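[Editor's note] The "report the minimum" behaviour Andreas describes can be sketched in a few lines. This is an illustrative model only, not Lustre's actual accounting code: the function name, the per-file metadata cost, and the one-stripe-per-file assumption are all assumptions made for the example.

```python
# Illustrative sketch only (NOT Lustre's real accounting): why the
# client-visible inode count can be far below what "df -i" reports on
# the raw MDS device. Every Lustre file needs an MDT inode, MDT blocks
# for its metadata, and at least one object on some OST, so a
# conservative "creatable files" figure is the minimum over those.

def conservative_free_files(mdt_free_inodes, mdt_free_blocks,
                            block_size, bytes_per_file_md,
                            ost_free_objects):
    """Minimum of: free MDT inodes, files that fit in the MDT's free
    blocks (at an assumed bytes_per_file_md of metadata each), and
    free OST objects (assuming one stripe per file)."""
    by_inodes = mdt_free_inodes
    by_blocks = mdt_free_blocks * block_size // bytes_per_file_md
    by_objects = sum(ost_free_objects)
    return min(by_inodes, by_blocks, by_objects)

# Toy numbers: an MDT formatted with far more inodes than its free
# blocks can back reports the block-limited figure, not the raw count.
print(conservative_free_files(
    mdt_free_inodes=943_652_829,   # raw "df -i" figure from the MDS
    mdt_free_blocks=60_000_000,    # hypothetical free 4k blocks
    block_size=4096,
    bytes_per_file_md=2048,        # assumed per-file metadata cost
    ost_free_objects=[244_195_207] + [244_195_271] * 5))
```

Whichever constraint is smallest wins, which is why raising the MDT's raw inode count alone (as with -i 1024 above) need not raise the number reported by "lfs df -i".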
Andreas Dilger
2007-Feb-13 22:17 UTC
[Lustre-discuss] No space left while running createmany
On Feb 12, 2007  15:59 -0800, Lin Shen (lshen) wrote:
> Yes, it's 39M. I'm trying to simulate running Lustre on a 512M Compact
> Flash. We'll use the CF to host the Linux root filesystem, so the
> number of files could still be big.

For a 39MB filesystem you only get about 39000 inodes, according to my
test with "mke2fs -j -b 4096 -i 1024 -I 512". That makes sense - you
have about 39000kB and you are asking for 1 inode per kilobyte.

With "mke2fs -j -b 4096 -m 1 -I 512 -N 64000" you can (not
surprisingly) get 64000 inodes in the filesystem, but at that point it
is very close to running out of free blocks and you may have problems
e.g. creating directories or storing Lustre-internal metadata.

Lustre isn't really designed to be efficient for very small
filesystems. It is designed to be good at scaling to very large
filesystems.

> BTW, there is another inode number associated with the OST. What's
> that one for?

The OSTs are separate filesystems. Each one has its own inodes.

> > -----Original Message-----
> > From: Nathaniel Rutman [mailto:nathan@clusterfs.com]
> > Sent: Monday, February 12, 2007 3:30 PM
> > To: Lin Shen (lshen)
> > Cc: lustre-discuss@clusterfs.com
> > Subject: Re: [Lustre-discuss] No space left while running createmany
> >
> > Your MDS device size is 39MB? You're not going to have a very big
> > filesystem. It wouldn't surprise me if the inode calcs look weird
> > at the low end due to fs overhead.
> >
> > In any case, you can just try the "mkfs.ext2" command that
> > mkfs.lustre prints out directly yourself, and play with the values
> > that way.
> >
> > BTW, for 1.5.97 you should make your fsname < 8 chars (see bz11564)

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
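[Editor's note] The inode arithmetic from Kalpak's and Andreas's replies condenses into a quick sanity check. This is a back-of-the-envelope sketch: real mke2fs rounds inode counts to block-group boundaries (which is also part of why Lin's tiny-filesystem numbers look odd), so actual counts differ slightly.

```python
# Back-of-the-envelope inode estimate from the thread: mke2fs creates
# roughly one inode per "bytes-per-inode" of device space (the -i
# option, defaulting to 4096 for small filesystems). Real mke2fs
# rounds to block-group boundaries, so exact counts differ.

MB = 1024 * 1024

def approx_inode_count(device_bytes, bytes_per_inode=4096):
    """Approximate number of inodes mke2fs would create."""
    return device_bytes // bytes_per_inode

print(approx_inode_count(32 * MB))        # 8192  - the 32MB MDT default
print(approx_inode_count(32 * MB, 2048))  # 16384 - enough for >10000 files
print(approx_inode_count(39 * MB, 1024))  # 39936 - "about 39000 inodes"
```

This also explains the original failure at 10000 files: the 32MB MDT's default ~8192 inodes (10032 after rounding, per the tune2fs output earlier in the thread) run out long before the blocks do, so "df" still shows free space while creates fail with ENOSPC.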