Hi all,

Do you know how to change the capacity of inode space? I see that it has already reached 72% on the Lustre filesystem I support. I would like to know where this parameter is set, and what the default value is. Of course, I know that the number of inodes depends on file size in a Lustre filesystem. Could you advise me on how to increase the inode capacity? If possible, I want to make this change without stopping service.

$ df -ih /work
Filesystem                              Inodes IUsed IFree IUse% Mounted on
10.12.200.1@o2ib:10.12.200.2@o2ib:/lfs    118M   85M   34M   72% /work

$ df -h /work
Filesystem                              Size  Used Avail Use% Mounted on
10.12.200.1@o2ib:10.12.200.2@o2ib:/lfs  788T   74T  675T  10% /work

My Lustre version is:

lustre-tests-1.6.5.1-2.6.18_53.1.6.el5_PAPI_200808121629
lustre-1.6.5.1-2.6.18_53.1.6.el5_PAPI_200808121629
lustre-source-1.6.5.1-2.6.18_53.1.6.el5_PAPI_200808121629
lustre-modules-1.6.5.1-2.6.18_53.1.6.el5_PAPI_200808121629

It was built on Red Hat EL 5.1, kernel 2.6.18-53.1.6.el5-PAPI.

Best regards,
Satoshi Isono
On Thu, 2009-01-29 at 13:32 +0900, Satoshi Isono wrote:
> Hi all,
>
> Do you know how to change the capacity of inode space?

Sorry, you can't. Up-front planning is very important in this aspect of deploying Lustre. In the future, adding MDTs will increase inode capacity, but currently only a single MDT is supported.

I *think* growing the size of the physical device and then using resize2fs to increase an ext3(4?) filesystem will increase the inode count. I am, however, doubtful that resize2fs will work with our ldiskfs filesystems. Perhaps resize2fs from an ext4-supporting version of e2fsprogs will support doing that. I have no idea -- never tried it.

That said, doing this is a completely unsupported operation. I don't know of anyone who has even tried it. If you want to go this route, do a lot of testing and make sure you have backups.

It will probably be easier and cheaper in the long run to just build a new MDT that meets your revised requirements.

b.
Hi Brian,

Thank you for your reply. I have inserted my comments inline below; please refer to them.

At 09/01/30 04:36, Brian J. Murrell wrote:
> On Thu, 2009-01-29 at 13:32 +0900, Satoshi Isono wrote:
> > Hi all,
> >
> > Do you know how to change the capacity of inode space?
>
> Sorry, you can't. Up-front planning is very important in this aspect of
> deploying Lustre. In the future, adding MDTs will increase inode
> capacity, but currently only a single MDT is supported.

So the current Lustre version is NOT able to use multiple MDTs. Is that right? I found this article in the Lustre FAQ:

http://wiki.lustre.org/index.php?title=Lustre_FAQ
* What is the maximum number of files in a single file system? In a single directory?

So, if we use the current Lustre 1.6.x on EXT3, we can only have a single MDT. Then, given the limitation on the number of inodes, we are able to use up to 4 billion inodes. This means that a Lustre filesystem built on EXT3 can support 4 million files. Is that correct?

> I *think* growing the size of the physical device and then using
> resize2fs to increase an ext3(4?) filesystem will increase inode count.
> I am however, doubtful that resize2fs will work with our ldiskfs
> filesystems. Perhaps resize2fs from an ext4 supporting version of
> e2fsprogs will support doing that. I have no idea -- never tried it.

Another question for you: when changing the number of inodes to the maximum, are there any downsides? I want to understand the tradeoffs of changing the inode count. At my site, the total OST capacity is 800 TB and the MDT has 123207680 inodes (123 million). What number of inodes do you think I should change to?

> That said, doing this is a completely unsupported operation. I don't
> know anyone that has even tried it. If you want to go this route, do a
> lot of testing and make sure you have backups.
>
> It will probably be easier and cheaper in the long run to just build a
> new MDT that meets your revised requirements.

I understand.
Of course, I am going to choose a safer way to change the inode count. I appreciate your advice.

Satoshi Isono
Greetings Satoshi,

I got more inodes by upgrading the size of the disk for the MDT. I backed up the original MDT using the procedure described in the Lustre 1.6 Operations Manual (version 1.12, May 2008 -- not the most recent, I know). After backing up the original MDT, I restored it to a larger disk and mounted that disk in place of the original MDT.

Good luck,
megan
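For anyone trying this route later, here is a rough sketch of the device-level variant of the copy megan describes. The device paths and mount point below are placeholders, not from her setup; the MDT must be stopped before the copy, and as Brian says this is unsupported territory, so test on scratch hardware and keep backups:

```shell
# Stop the MDS first -- the MDT must not be mounted during the copy.
umount /mnt/mdt

# Bit-for-bit copy of the old MDT onto the larger device
# (preserves everything, including Lustre EAs).
dd if=/dev/old_mdt of=/dev/new_mdt bs=1M

# Optionally grow the filesystem into the new device's extra space.
# Offline resize2fs on ldiskfs is reported to work but is unsupported.
e2fsck -f /dev/new_mdt
resize2fs /dev/new_mdt

# Mount the new device where the original MDT was mounted.
mount -t lustre /dev/new_mdt /mnt/mdt
```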
On Fri, 2009-01-30 at 11:58 +0900, Satoshi Isono wrote:
> Hi Brian,

Hi.

> Current Lustre version is NOT able to multiple MDT. Is it right?

This is correct.

> I found the article in Lustre FAQ.
>
> http://wiki.lustre.org/index.php?title=Lustre_FAQ
> * What is the maximum number of files in a single file system? In a
> single directory?
>
> So, if we use current Lustre 1.6.x on EXT3, we can only support single
> MDT. Then, according to the limitation of the number of inodes, we are
> able to use inodes up to 4 billion.

Hrm. To get 4 billion inodes out of an 8 TB device (8 TB is the current limit on the size of a Lustre target) you'd need to be using 2k blocks:

8*1024^4 / (4*1024^3) = 2048

The default is 4k blocks (so in order to use 2k blocks you will need to specify that in your MDT format command -- see man mkfs.lustre and man mkfs.ext3), which yields only 2 billion inodes out of the maximum 8 TB Lustre device.

A 2k block size is certainly usable as long as you don't want to stripe files too widely. Providing for the maximum striping of 160 stripes is why we allocate 4k blocks: it takes a 4k inode to hold the striping info for 160 stripes.

> This means that a Lustre consisted on EXT3 can support 4 million
> files.

4 _b_illion files, not 4 million, and yes, as long as you can live with 2k inodes.

> Another question to you, When changing #inodes into maximum number,
> are there any demerit/un-merit points?

Reducing the block size to 2k does have some performance implications.

> I want to know the tradeoff changing #inodes. In my site, the total
> OST capacity is 800TB and size of MDT is 123207680 (123 million). What
> do you think about the number of inodes which I will change?

That depends entirely on what you are storing in the filesystem. I could not even try to make a comment.

> I understand. Of course, I am going to choose more safety way to
> change #inodes.

FWIW, one of our engineers reports having used resize2fs (offline) on a Lustre device successfully in the past.
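Brian's arithmetic above can be checked directly in the shell. A quick sketch, using the 8 TB target limit and the 4k/2k bytes-per-inode ratios from his message:

```shell
# 8 TB MDT at the default 4k bytes-per-inode ratio: ~2 billion inodes.
echo $((8 * 1024 * 1024 * 1024 * 1024 / 4096))   # 2147483648

# The same 8 TB device formatted at 2k bytes-per-inode: ~4 billion.
echo $((8 * 1024 * 1024 * 1024 * 1024 / 2048))   # 4294967296
```

That is, only the 2k ratio reaches the 4-billion figure from the FAQ; the default format tops out at about half that.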
It's still a completely unsupported operation, however, and you must proceed down that path with all caution should you choose it.

b.
Hello!

On Jan 29, 2009, at 9:58 PM, Satoshi Isono wrote:
> http://wiki.lustre.org/index.php?title=Lustre_FAQ
> * What is the maximum number of files in a single file system? In
> a single directory?
>
> So, if we use current Lustre 1.6.x on EXT3, we can only support
> single MDT. Then, according to the limitation of the number of
> inodes, we are able to use inodes up to 4 billion. This means that a
> Lustre consisted on EXT3 can support 4 million files. Is it correct?

No. It is not 4 million, it is 4 billion files (in the US, a billion means 1000 times a million (10^9), so 1,000,000,000).

> > I *think* growing the size of the physical device and then using
> > resize2fs to increase an ext3(4?) filesystem will increase inode
> > count.
> > I am however, doubtful that resize2fs will work with our ldiskfs
> > filesystems. Perhaps resize2fs from an ext4 supporting version of
> > e2fsprogs will support doing that. I have no idea -- never tried it.
>
> Another question to you, When changing #inodes into maximum number,
> are there any demerit/un-merit points? I want to know the tradeoff
> changing #inodes. In my site, the total OST capacity is 800TB and
> size of MDT is 123207680 (123 million). What do you think about the
> number of inodes which I will change?

The more inodes you have, the less free space is left on the MDT filesystem; of course, the space on the MDT is rarely used otherwise. Another downside is that the more inodes you have, the slower e2fsck will run on the filesystem should you ever encounter any fs problems.

There is another filesystem-wide limit Lustre currently imposes: it assumes that you cannot have more files (and hence inodes) than you have (4k) blocks on your OSTs.

Bye,
Oleg
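Oleg's OST-side limit can be put into numbers for Satoshi's site. A quick sketch, taking the thread's figures of 800 TB of OST capacity and 4k blocks (treating "800TB" as exactly 800 * 2^40 bytes, which is an assumption):

```shell
# One file per 4k OST block: the file-count ceiling implied by
# 800 TB of OST space.
echo $((800 * 1024 * 1024 * 1024 * 1024 / 4096))   # 214748364800
```

That is roughly 215 billion, far above the ~4 billion MDT inode ceiling discussed earlier, so at this site the MDT inode count, not the OST block count, would be the binding limit.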