Hello,

When I run "df -i" on my clients I get 95% inodes used, or 5% inodes free:

Filesystem                           Inodes    IUsed   IFree IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248  95% /mnt/data

But if I run "lfs df -i" I get:

UUID                      Inodes     IUsed     IFree IUse% Mounted on
cetafs-MDT0000_UUID    975470592  20949223 954521369    2% /mnt/data[MDT:0]
cetafs-OST0000_UUID     19073280  17822213   1251067   93% /mnt/data[OST:0]
cetafs-OST0001_UUID     19073280  17822532   1250748   93% /mnt/data[OST:1]
cetafs-OST0002_UUID     19073280  17822560   1250720   93% /mnt/data[OST:2]
cetafs-OST0003_UUID     19073280  17822622   1250658   93% /mnt/data[OST:3]
cetafs-OST0004_UUID     19073280  17822181   1251099   93% /mnt/data[OST:4]
cetafs-OST0005_UUID     19073280  17822769   1250511   93% /mnt/data[OST:5]
cetafs-OST0006_UUID     19073280  17822378   1250902   93% /mnt/data[OST:6]
cetafs-OST0007_UUID     19073280  17822131   1251149   93% /mnt/data[OST:7]
cetafs-OST0008_UUID     19073280  17822419   1250861   93% /mnt/data[OST:8]
cetafs-OST0009_UUID     19073280  17822151   1251129   93% /mnt/data[OST:9]
cetafs-OST000a_UUID     19073280  17822894   1250386   93% /mnt/data[OST:10]
cetafs-OST000b_UUID     19073280  17822328   1250952   93% /mnt/data[OST:11]
cetafs-OST000c_UUID     19073280  17822388   1250892   93% /mnt/data[OST:12]
cetafs-OST000d_UUID     19073280  17822336   1250944   93% /mnt/data[OST:13]
cetafs-OST000e_UUID     19073280  17822139   1251141   93% /mnt/data[OST:14]
cetafs-OST000f_UUID     19073280  17823451   1249829   93% /mnt/data[OST:15]
cetafs-OST0010_UUID     19073280  17822354   1250926   93% /mnt/data[OST:16]
cetafs-OST0011_UUID     19073280  17822676   1250604   93% /mnt/data[OST:17]

filesystem summary:    975470592  20949223 954521369    2% /mnt/data

I have a 2 TB MDT of which only 87 GB is used.

Any suggestion?

--
Alfonso Pardo Díaz
Researcher / System Administrator at CETA-Ciemat
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17  Fax: +34 927 32 32 37
CETA-Ciemat <http://www.ceta-ciemat.es/>
Enrico Tagliavini
2012-Sep-27 12:26 UTC
[Lustre-discuss] [wc-discuss] Bad reporting inodes free
Disclaimer: I'm not 100% sure this guess is correct, so please correct me if I'm wrong :).

The number of available inodes is not limited only by the MDT size. Your data physically lives on the OSTs, which use ldiskfs (a modified ext3/ext4), and ldiskfs has the same inode limits as normal ext4. It is true that you can't create more inodes than the MDT limit, but at the same time the OSTs can run out of inodes even if the MDT is far from full. In other words, you have two limiting factors for inodes: the total number of inodes supported by your MDT(s), and the sum across the OSTs.

The solution is to add more OSTs, or to format them with more inodes. I have no idea whether you can change this without reformatting them; if I recall correctly, it is not possible with ext4.

Regards
Enrico

On Thu, Sep 27, 2012 at 2:17 PM, Alfonso Pardo <alfonso.pardo at ciemat.es> wrote:
> Hello,
>
> When I run "df -i" on my clients I get 95% inodes used, or 5% inodes free:
>
> Filesystem                           Inodes    IUsed   IFree IUse% Mounted on
> lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248  95% /mnt/data
>
> But if I run "lfs df -i" I get:
>
> UUID                      Inodes     IUsed     IFree IUse% Mounted on
> cetafs-MDT0000_UUID    975470592  20949223 954521369    2% /mnt/data[MDT:0]
> cetafs-OST0000_UUID     19073280  17822213   1251067   93% /mnt/data[OST:0]
> [...]
> cetafs-OST0011_UUID     19073280  17822676   1250604   93% /mnt/data[OST:17]
>
> filesystem summary:    975470592  20949223 954521369    2% /mnt/data
>
> I have a 2 TB MDT of which only 87 GB is used.
>
> Any suggestion?
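For anyone who does end up going the reformat route Enrico describes, a minimal sketch of what that could look like on an emptied OST follows. The device path, MGS NID, target index and the bytes-per-inode value are placeholders, not values taken from this cluster; check mkfs.lustre(8) and mke2fs(8) for the version in use.

    # Inspect how many inodes an (unmounted) OST device was formatted with.
    dumpe2fs -h /dev/sdX | grep -i 'inode count'

    # Reformat an emptied OST with more inodes by lowering bytes-per-inode.
    # --fsname matches this thread; --mgsnode, --index and /dev/sdX are
    # placeholders; "-i 65536" means one inode per 64 KiB of OST space.
    mkfs.lustre --ost --reformat \
        --fsname=cetafs --mgsnode=<mgs_nid>@tcp --index=0 \
        --mkfsoptions="-i 65536" /dev/sdX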
Alfonso Pardo
2012-Sep-28 07:03 UTC
[Lustre-discuss] [wc-discuss] Bad reporting inodes free
If I need more inodes on my OSTs, I have a big problem, because I would need to reformat every OST in my production storage environment.

Any ideas for increasing the number of inodes on my OSTs without reformatting?

Thanks!!!

On 27/09/12 14:26, Enrico Tagliavini wrote:
> The number of available inodes is not limited only by the MDT size. [...]
> In other words, you have two limiting factors for inodes: the total number
> of inodes supported by your MDT(s), and the sum across the OSTs.
>
> The solution is to add more OSTs, or to format them with more inodes. I
> have no idea whether you can change this without reformatting them; if I
> recall correctly, it is not possible with ext4.

--
Alfonso Pardo Díaz
Researcher / System Administrator at CETA-Ciemat
c/ Sola nº 1; 10200 Trujillo, ESPAÑA
Tel: +34 927 65 93 17  Fax: +34 927 32 32 37
CETA-Ciemat <http://www.ceta-ciemat.es/>
Johann Lombardi
2012-Sep-28 08:15 UTC
[Lustre-discuss] [wc-discuss] Bad reporting inodes free
On 28 sept. 2012, at 09:03, Alfonso Pardo wrote:
> If I need more inodes on my OSTs, I have a big problem, because I would
> need to reformat every OST in my production storage environment.
>
> Any ideas for increasing the number of inodes on my OSTs without
> reformatting?

The number of inodes is decided at mkfs time and can't be changed easily. If your OSTs are on LVM, one option could be to increase the size of the logical volumes and resize the backend ldiskfs filesystems. However, I'm not sure anyone has ever tried this with ldiskfs/Lustre, so I would not do it on a production filesystem without some testing first.

In any case, I think the issue here is that you seem to have a default stripe count of -1 (stripe across all OSTs). That's why you are limited by the number of inodes available on a single OST. You should still be able to create ~1.25M x 18 = 22.5M 1-stripe files.

Cheers,
Johann
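As a rough illustration of both points above, something along these lines could be tried. The striping check is standard; the LVM-grow part is, as noted, untested with ldiskfs in production, and the logical volume name, mount point and size are placeholders.

    # Show the default striping on the filesystem root; a stripe_count of -1
    # means "stripe across all OSTs".
    lfs getstripe -d /mnt/data

    # Untested with ldiskfs/Lustre (see above): grow an OST logical volume,
    # then grow the filesystem.  resize2fs adds new block groups, and each
    # new group brings its own inodes, so the inode count grows too.
    umount /mnt/ost0000                   # stop the OST first
    lvextend -L +200G /dev/vg_ost/ost0000
    e2fsck -f /dev/vg_ost/ost0000         # required before an offline resize
    resize2fs /dev/vg_ost/ost0000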
Dilger, Andreas
2012-Sep-28 12:17 UTC
[Lustre-discuss] [wc-discuss] Bad reporting inodes free
On 2012-09-28, at 2:15, Johann Lombardi <johann.lombardi at linux.intel.com> wrote:
> On 28 sept. 2012, at 09:03, Alfonso Pardo wrote:
>> If I need more inodes on my OSTs, I have a big problem, because I would
>> need to reformat every OST in my production storage environment.
>>
>> Any ideas for increasing the number of inodes on my OSTs without
>> reformatting?
>
> The number of inodes is decided at mkfs time and can't be changed easily.
> If your OSTs are on LVM, one option could be to increase the size of the
> logical volumes and resize the backend ldiskfs filesystems. However, I'm
> not sure anyone has ever tried this with ldiskfs/Lustre, so I would not do
> it on a production filesystem without some testing first.

Right. It would be possible to use "lfs_migrate" to empty an OST, then reformat it with more inodes, then migrate another OST's files to the newly formatted OST, and repeat. However, I think that would be pointless, see below.

> In any case, I think the issue here is that you seem to have a default
> stripe count of -1. That's why you are limited by the number of inodes
> available on a single OST. You should still be able to create
> ~1.25M x 18 = 22.5M 1-stripe files.

That is my thought as well - the default file striping is too large. What is the average file size (total space used / MDT used inode count)? Having many stripes on small files (below tens of MB per stripe) is actually bad for performance.

If this is the case (many stripes on small files), then it is possible to use lfs_migrate to change the striping of the files incrementally (caveat: it is only safe for files known not to be in use). As Johann mentioned, there are still over 20M OST inodes free, and if the existing files were converted to 1-stripe files there would be about 330M OST inodes free.

Cheers, Andreas
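One possible way to get that average file size from the "lfs df" output, plus a sketch of the incremental restriping, assuming a single MDT and the column layout shown earlier in this thread. The subtree name is a placeholder, and lfs_migrate option handling differs between Lustre releases, so verify the result on a scratch directory first.

    # Rough average file size: total OST space used divided by the number
    # of files (MDT inodes used).  Assumes a single MDT; "Used" and "IUsed"
    # are the third column of the per-target lines.
    used_kb=$(lfs df /mnt/data | awk '/OST/ {sum += $3} END {print sum}')
    nfiles=$(lfs df -i /mnt/data | awk '/MDT/ {print $3}')
    echo "average file size: $((used_kb / nfiles)) KB"

    # Make new files single-striped by default.
    lfs setstripe -c 1 /mnt/data

    # Restripe existing files one idle subtree at a time.  lfs_migrate
    # rewrites each file, so only run it on files known not to be in use;
    # whether it applies the directory default layout or keeps the old
    # stripe count depends on the Lustre version, so check one file with
    # "lfs getstripe" afterwards.
    lfs find /mnt/data/some_idle_subtree -type f | lfs_migrate -y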