Andreas Dilger
2011-Apr-21 18:40 UTC
[Lustre-discuss] Research on filesystem metadata operation distribution
I'm trying to get some data about the relative distribution of MDS operations in the wild, and I'd be grateful if some people with production filesystems that have been running for at least a week could collect some simple stats and email them to me. They can be collected by any regular user on the MDS node:

lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"

It would be useful to also include "lfs df" and "lfs df -i" information, as well as a brief description of what the filesystem is used for (scratch, home, project, archive, etc).

As a reminder, I'm also interested if some Lustre admins could run the "fsstats" tool from http://www.pdsi-scidac.org/fsstats/ and send me the output. Sending the output to PDSI via their submission form may also produce some positive results.

http://www.pdsi-scidac.org/fsstats/files/fsstats-1.4.5.tar.gz

Thanks in advance for any data. I've set replies to go only to lustre-devel, to avoid clogging the larger readership of lustre-discuss, but it may be useful for others to have this in a list archive and/or searchable via Google in the future, so I don't necessarily want to keep it all to myself.

Cheers, Andreas
--
Andreas Dilger
Principal Engineer
Whamcloud, Inc.
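[For anyone who wants to gather everything the mail asks for in one pass, a minimal sketch along these lines should work; it only combines the commands quoted above. The output filename is arbitrary, and it assumes lctl and lfs are in PATH on a node that is both the MDS and a Lustre client (otherwise run the lfs commands from a client):

#!/bin/sh
# Collect the requested MDS operation counters plus space/inode usage
# into a single file suitable for emailing.
OUT=mds-stats-$(hostname).txt
{
  echo "== MDS operation counts =="
  lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"
  echo "== lfs df =="
  lfs df
  echo "== lfs df -i =="
  lfs df -i
} > "$OUT"
echo "Stats written to $OUT"
]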
Chris Walker
2011-Apr-21 18:50 UTC
[Lustre-devel] [Lustre-discuss] Research on filesystem metadata operation distribution
/n/data (general purpose):

[root@circemds1 ~]# lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"
open 27110498653 samples [reqs]
close 5181068572 samples [reqs]
link 19102 samples [reqs]
unlink 95211743 samples [reqs]
rename 11919203 samples [reqs]
setattr 401208317 samples [reqs]
getattr 3502169172 samples [reqs]

lfs df
UUID 1K-blocks Used Available Use% Mounted on
circelfs-MDT0000_UUID 624910300 15503992 573693816 2% /n/data[MDT:0]
circelfs-OST0000_UUID 3759898504 3576773148 183098116 95% /n/data[OST:0]
circelfs-OST0001_UUID 3759898504 3567047824 192849208 94% /n/data[OST:1]
circelfs-OST0002_UUID 3844954168 3634309372 210564872 94% /n/data[OST:2]
circelfs-OST0003_UUID 3844954168 3642678204 202274940 94% /n/data[OST:3]
circelfs-OST0004_UUID 3729601424 3539940528 189634296 94% /n/data[OST:4]
circelfs-OST0005_UUID 3844954168 3658277796 186643060 95% /n/data[OST:5]
circelfs-OST0006_UUID 3844954168 3650323540 194589396 94% /n/data[OST:6]
circelfs-OST0007_UUID 3844954168 3648381604 196541040 94% /n/data[OST:7]
circelfs-OST0008_UUID 3759898504 3589631096 170173692 95% /n/data[OST:8]
circelfs-OST0009_UUID 3759898504 3579220544 180625232 95% /n/data[OST:9]
circelfs-OST000a_UUID 3844954168 3678558980 166373848 95% /n/data[OST:10]
circelfs-OST000b_UUID 3844954168 3675247472 169659080 95% /n/data[OST:11]
circelfs-OST000c_UUID 3729601424 3529929552 199628560 94% /n/data[OST:12]
circelfs-OST000d_UUID 3844954168 3616859740 228051912 94% /n/data[OST:13]
circelfs-OST000e_UUID 3844954168 3625217752 219699816 94% /n/data[OST:14]
circelfs-OST000f_UUID 3844954168 3644184848 200720764 94% /n/data[OST:15]
circelfs-OST0010_UUID 3759898504 3576388952 183459748 95% /n/data[OST:16]
circelfs-OST0011_UUID 3759898504 3560942452 198889108 94% /n/data[OST:17]
circelfs-OST0012_UUID 3844954168 3664210272 180717356 95% /n/data[OST:18]
circelfs-OST0013_UUID 3844954168 3657736452 187188384 95% /n/data[OST:19]
circelfs-OST0014_UUID 3729601424 3559121744 170414456 95% /n/data[OST:20]
circelfs-OST0015_UUID 3844954168 3619354872 225542700 94% /n/data[OST:21]
circelfs-OST0016_UUID 3844954168 3640468044 204446952 94% /n/data[OST:22]
circelfs-OST0017_UUID 3844954168 3674764168 170127980 95% /n/data[OST:23]
circelfs-OST0018_UUID 3844954168 3627092948 217828696 94% /n/data[OST:24]
circelfs-OST0019_UUID 3729601424 3535664852 193935548 94% /n/data[OST:25]
circelfs-OST001a_UUID 3844954168 3634647484 210305868 94% /n/data[OST:26]
circelfs-OST001b_UUID 3844954168 3636805036 208106824 94% /n/data[OST:27]
circelfs-OST001c_UUID 3759898504 3556374520 203433620 94% /n/data[OST:28]
circelfs-OST001d_UUID 3844954168 3662286736 182643296 95% /n/data[OST:29]
circelfs-OST001e_UUID 3844954168 3638805056 206123324 94% /n/data[OST:30]
circelfs-OST001f_UUID 1879950588 1782719784 97187448 94% /n/data[OST:31]
circelfs-OST0020_UUID 3759898504 3581449120 178387092 95% /n/data[OST:32]
circelfs-OST0021_UUID 4699875208 4416545280 283272492 93% /n/data[OST:33]
circelfs-OST0022_UUID 4806194208 4539916232 266253044 94% /n/data[OST:34]
circelfs-OST0023_UUID 4806194208 4506930724 299234284 93% /n/data[OST:35]
circelfs-OST0024_UUID 4662002272 4391493156 270508092 94% /n/data[OST:36]
circelfs-OST0025_UUID 4806194208 4575711520 230427632 95% /n/data[OST:37]
circelfs-OST0026_UUID 4806194208 4556692828 249458004 94% /n/data[OST:38]
circelfs-OST0027_UUID 4806194208 4516547112 289628792 93% /n/data[OST:39]

filesystem summary: 157169476196 148969251344 8198648572 94% /n/data

lfs df -i
UUID Inodes IUsed IFree IUse% Mounted on
circelfs-MDT0000_UUID 178585600 159843235 18742365 89% /n/data[MDT:0]
circelfs-OST0000_UUID 49665762 3884423 45781339 7% /n/data[OST:0]
circelfs-OST0001_UUID 52140996 3921567 48219429 7% /n/data[OST:1]
circelfs-OST0002_UUID 56777852 4116653 52661199 7% /n/data[OST:2]
circelfs-OST0003_UUID 54642729 4069383 50573346 7% /n/data[OST:3]
circelfs-OST0004_UUID 51319864 3904640 47415224 7% /n/data[OST:4]
circelfs-OST0005_UUID 50833017 4163924 46669093 8% /n/data[OST:5]
circelfs-OST0006_UUID 52922813 4265156 48657657 8% /n/data[OST:6]
circelfs-OST0007_UUID 53335983 4192842 49143141 7% /n/data[OST:7]
circelfs-OST0008_UUID 46382472 3815620 42566852 8% /n/data[OST:8]
circelfs-OST0009_UUID 49119318 3949828 45169490 8% /n/data[OST:9]
circelfs-OST000a_UUID 45567368 3968571 41598797 8% /n/data[OST:10]
circelfs-OST000b_UUID 46377787 3951113 42426674 8% /n/data[OST:11]
circelfs-OST000c_UUID 53972974 4055006 49917968 7% /n/data[OST:12]
circelfs-OST000d_UUID 60536895 3513288 57023607 5% /n/data[OST:13]
circelfs-OST000e_UUID 58664263 3729926 54934337 6% /n/data[OST:14]
circelfs-OST000f_UUID 54002048 3805109 50196939 7% /n/data[OST:15]
circelfs-OST0010_UUID 50010476 4133088 45877388 8% /n/data[OST:16]
circelfs-OST0011_UUID 53478928 3739762 49739166 6% /n/data[OST:17]
circelfs-OST0012_UUID 48965685 3779711 45185974 7% /n/data[OST:18]
circelfs-OST0013_UUID 50706841 3902412 46804429 7% /n/data[OST:19]
circelfs-OST0014_UUID 46622191 4002268 42619923 8% /n/data[OST:20]
circelfs-OST0015_UUID 60322395 3926565 56395830 6% /n/data[OST:21]
circelfs-OST0016_UUID 55176193 4054521 51121672 7% /n/data[OST:22]
circelfs-OST0017_UUID 46475197 3927689 42547508 8% /n/data[OST:23]
circelfs-OST0018_UUID 58494562 4023368 54471194 6% /n/data[OST:24]
circelfs-OST0019_UUID 52507373 4019644 48487729 7% /n/data[OST:25]
circelfs-OST001a_UUID 56467843 3889636 52578207 6% /n/data[OST:26]
circelfs-OST001b_UUID 56069947 4032664 52037283 7% /n/data[OST:27]
circelfs-OST001c_UUID 55008022 4127020 50881002 7% /n/data[OST:28]
circelfs-OST001d_UUID 49505092 3838234 45666858 7% /n/data[OST:29]
circelfs-OST001e_UUID 55683855 4146575 51537280 7% /n/data[OST:30]
circelfs-OST001f_UUID 26327143 2019442 24307701 7% /n/data[OST:31]
circelfs-OST0020_UUID 47956346 3344000 44612346 6% /n/data[OST:32]
circelfs-OST0021_UUID 74869419 4037050 70832369 5% /n/data[OST:33]
circelfs-OST0022_UUID 70831554 4287967 66543587 6% /n/data[OST:34]
circelfs-OST0023_UUID 78978347 4162475 74815872 5% /n/data[OST:35]
circelfs-OST0024_UUID 71879035 4250476 67628559 5% /n/data[OST:36]
circelfs-OST0025_UUID 61820346 4199521 57620825 6% /n/data[OST:37]
circelfs-OST0026_UUID 66674379 4299034 62375345 6% /n/data[OST:38]
circelfs-OST0027_UUID 76668558 4256622 72411936 5% /n/data[OST:39]

filesystem summary: 178585600 159843235 18742365 89% /n/data

/n/panlfs (general purpose):

[root@panmds ~]# lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"
open 55439389107 samples [reqs]
close 5873727879 samples [reqs]
link 1855475 samples [reqs]
unlink 127524184 samples [reqs]
rename 8877227 samples [reqs]
getxattr 26584744 samples [reqs]
setattr 171354158 samples [reqs]
getattr 3188738225 samples [reqs]

[root@panmds ~]# lfs df
UUID 1K-blocks Used Available Use% Mounted on
panlfs-MDT0000_UUID 1181496752 33912532 1080064732 2% /n/panlfs[MDT:0]
panlfs-OST0000_UUID 5765436120 4974923264 790512780 86% /n/panlfs[OST:0]
panlfs-OST0001_UUID 5765436120 4967568572 797754164 86% /n/panlfs[OST:1]
panlfs-OST0002_UUID 5765436120 4965892684 799366576 86% /n/panlfs[OST:2]
panlfs-OST0003_UUID 5765436120 4965261380 800015864 86% /n/panlfs[OST:3]
panlfs-OST0004_UUID 5765436120 4963104608 802331336 86% /n/panlfs[OST:4]
panlfs-OST0005_UUID 5765436120 5014836692 750577924 86% /n/panlfs[OST:5]
panlfs-OST0006_UUID 5765436120 4980876888 784558272 86% /n/panlfs[OST:6]
panlfs-OST0007_UUID 5765436120 4991355684 774080308 86% /n/panlfs[OST:7]
panlfs-OST0008_UUID 5765436120 4959617256 805687812 86% /n/panlfs[OST:8]
panlfs-OST0009_UUID 5765436120 4961260628 804175148 86% /n/panlfs[OST:9]
panlfs-OST000a_UUID 5765436120 4964872652 800558348 86% /n/panlfs[OST:10]
panlfs-OST000b_UUID 5765436120 4961539444 803894612 86% /n/panlfs[OST:11]
panlfs-OST000c_UUID 5765436120 4968309644 796943316 86% /n/panlfs[OST:12]
panlfs-OST000d_UUID 5765436120 4966318752 799117364 86% /n/panlfs[OST:13]
panlfs-OST000e_UUID 5765436120 4971024728 794309804 86% /n/panlfs[OST:14]
panlfs-OST000f_UUID 5765436120 4964518012 800767592 86% /n/panlfs[OST:15]
panlfs-OST0010_UUID 5765436120 4971079772 794356052 86% /n/panlfs[OST:16]
panlfs-OST0011_UUID 5765436120 4947691048 817743996 85% /n/panlfs[OST:17]
panlfs-OST0012_UUID 5765436120 4782610928 982801640 82% /n/panlfs[OST:18]
panlfs-OST0013_UUID 5765436120 4791320796 974109108 83% /n/panlfs[OST:19]
panlfs-OST0014_UUID 5765436120 4788726168 976685372 83% /n/panlfs[OST:20]
panlfs-OST0015_UUID 5765436120 4780325788 985098044 82% /n/panlfs[OST:21]
panlfs-OST0016_UUID 5765436120 4831302484 934133556 83% /n/panlfs[OST:22]
panlfs-OST0017_UUID 5765436120 4789102248 976324656 83% /n/panlfs[OST:23]
panlfs-OST0018_UUID 5765436120 4784434788 980982900 82% /n/panlfs[OST:24]
panlfs-OST0019_UUID 5765436120 4786139740 979295356 83% /n/panlfs[OST:25]
panlfs-OST001a_UUID 5765436120 4784068940 981247400 82% /n/panlfs[OST:26]
panlfs-OST001b_UUID 5765436120 4781602996 983714956 82% /n/panlfs[OST:27]
panlfs-OST001c_UUID 5765436120 4781088948 984338980 82% /n/panlfs[OST:28]
panlfs-OST001d_UUID 5765436120 4780381632 985054484 82% /n/panlfs[OST:29]
panlfs-OST001e_UUID 5766984312 4069861960 1696963576 70% /n/panlfs[OST:30]
panlfs-OST001f_UUID 5766984312 3908778724 1858056516 67% /n/panlfs[OST:31]
panlfs-OST0020_UUID 5766984312 3916492512 1850483608 67% /n/panlfs[OST:32]
panlfs-OST0021_UUID 5766984312 4015278136 1751682736 69% /n/panlfs[OST:33]
panlfs-OST0022_UUID 5766984312 4039640892 1727343380 70% /n/panlfs[OST:34]
panlfs-OST0023_UUID 5766984312 4032875720 1734085616 69% /n/panlfs[OST:35]
panlfs-OST0024_UUID 7689310636 5379578792 2309710912 69% /n/panlfs[OST:36]
panlfs-OST0025_UUID 7689310636 5368547620 2320760968 69% /n/panlfs[OST:37]
panlfs-OST0026_UUID 5766984312 4031962692 1735021096 69% /n/panlfs[OST:38]
panlfs-OST0027_UUID 5766984312 4064421580 1702550384 70% /n/panlfs[OST:39]
panlfs-OST0028_UUID 5766984312 4017407756 1749576536 69% /n/panlfs[OST:40]
panlfs-OST0029_UUID 5766984312 4088980072 1678004236 70% /n/panlfs[OST:41]
panlfs-OST002a_UUID 5766984312 4037071336 1729901712 70% /n/panlfs[OST:42]
panlfs-OST002b_UUID 5766984312 4031950200 1735034096 69% /n/panlfs[OST:43]
panlfs-OST002c_UUID 7689310636 5430528320 2258779760 70% /n/panlfs[OST:44]
panlfs-OST002d_UUID 7689310636 5311913940 2377396612 69% /n/panlfs[OST:45]
panlfs-OST002e_UUID 5766984312 4025550116 1741432148 69% /n/panlfs[OST:46]
panlfs-OST002f_UUID 5766984312 3953045972 1813937256 68% /n/panlfs[OST:47]
panlfs-OST0030_UUID 5766984312 4208388240 1558493060 72% /n/panlfs[OST:48]
panlfs-OST0031_UUID 5766984312 4011293516 1755680556 69% /n/panlfs[OST:49]
panlfs-OST0032_UUID 5766984312 3926746328 1840231656 68% /n/panlfs[OST:50]
panlfs-OST0033_UUID 5766984312 4027338528 1739639084 69% /n/panlfs[OST:51]
panlfs-OST0034_UUID 7689310636 5377549996 2311737664 69% /n/panlfs[OST:52]
panlfs-OST0035_UUID 7689310636 5372207348 2317103220 69% /n/panlfs[OST:53]
panlfs-OST0036_UUID 5766984312 3927872888 1839107328 68% /n/panlfs[OST:54]
panlfs-OST0037_UUID 5766984312 4023713564 1743270716 69% /n/panlfs[OST:55]
panlfs-OST0038_UUID 5766984312 4021830276 1745148900 69% /n/panlfs[OST:56]
panlfs-OST0039_UUID 5766984312 3934077132 1832886248 68% /n/panlfs[OST:57]
panlfs-OST003a_UUID 5766984312 4033701168 1733152684 69% /n/panlfs[OST:58]
panlfs-OST003b_UUID 5766984312 4023597696 1743383540 69% /n/panlfs[OST:59]
panlfs-OST003c_UUID 7689310636 5376225432 2313034004 69% /n/panlfs[OST:60]
panlfs-OST003d_UUID 7689310636 5382523640 2306769476 70% /n/panlfs[OST:61]

filesystem summary: 372885192176 286292109256 86590897004 76% /n/panlfs

lfs df -i
UUID Inodes IUsed IFree IUse% Mounted on
panlfs-MDT0000_UUID 337608704 100870063 236738641 29% /n/panlfs[MDT:0]
panlfs-OST0000_UUID 199061357 1428948 197632409 0% /n/panlfs[OST:0]
panlfs-OST0001_UUID 200841380 1433128 199408252 0% /n/panlfs[OST:1]
panlfs-OST0002_UUID 201279904 1435696 199844208 0% /n/panlfs[OST:2]
panlfs-OST0003_UUID 201426659 1442402 199984257 0% /n/panlfs[OST:3]
panlfs-OST0004_UUID 202009708 1439610 200570098 0% /n/panlfs[OST:4]
panlfs-OST0005_UUID 189068210 1433302 187634908 0% /n/panlfs[OST:5]
panlfs-OST0006_UUID 197500278 1430549 196069729 0% /n/panlfs[OST:6]
panlfs-OST0007_UUID 194897600 1412501 193485099 0% /n/panlfs[OST:7]
panlfs-OST0008_UUID 202853017 1444880 201408137 0% /n/panlfs[OST:8]
panlfs-OST0009_UUID 202450065 1442617 201007448 0% /n/panlfs[OST:9]
panlfs-OST000a_UUID 201533261 1441066 200092195 0% /n/panlfs[OST:10]
panlfs-OST000b_UUID 202347095 1441975 200905120 0% /n/panlfs[OST:11]
panlfs-OST000c_UUID 200661982 1429801 199232181 0% /n/panlfs[OST:12]
panlfs-OST000d_UUID 201207975 1424647 199783328 0% /n/panlfs[OST:13]
panlfs-OST000e_UUID 200013486 1435930 198577556 0% /n/panlfs[OST:14]
panlfs-OST000f_UUID 201655294 1437041 200218253 0% /n/panlfs[OST:15]
panlfs-OST0010_UUID 199977755 1437687 198540068 0% /n/panlfs[OST:16]
panlfs-OST0011_UUID 205792546 1441571 204350975 0% /n/panlfs[OST:17]
panlfs-OST0012_UUID 247183952 1510162 245673790 0% /n/panlfs[OST:18]
panlfs-OST0013_UUID 244975158 1502607 243472551 0% /n/panlfs[OST:19]
panlfs-OST0014_UUID 245626362 1517839 244108523 0% /n/panlfs[OST:20]
panlfs-OST0015_UUID 247752267 1514148 246238119 0% /n/panlfs[OST:21]
panlfs-OST0016_UUID 235012597 1514167 233498430 0% /n/panlfs[OST:22]
panlfs-OST0017_UUID 245566479 1523072 244043407 0% /n/panlfs[OST:23]
panlfs-OST0018_UUID 246728144 1511148 245216996 0% /n/panlfs[OST:24]
panlfs-OST0019_UUID 246287058 1508750 244778308 0% /n/panlfs[OST:25]
panlfs-OST001a_UUID 246828538 1519283 245309255 0% /n/panlfs[OST:26]
panlfs-OST001b_UUID 247425369 1517824 245907545 0% /n/panlfs[OST:27]
panlfs-OST001c_UUID 247570951 1520608 246050343 0% /n/panlfs[OST:28]
panlfs-OST001d_UUID 247744143 1521256 246222887 0% /n/panlfs[OST:29]
panlfs-OST001e_UUID 366182400 1402921 364779479 0% /n/panlfs[OST:30]
panlfs-OST001f_UUID 366182400 1413243 364769157 0% /n/panlfs[OST:31]
panlfs-OST0020_UUID 366182400 1407899 364774501 0% /n/panlfs[OST:32]
panlfs-OST0021_UUID 366182400 1449305 364733095 0% /n/panlfs[OST:33]
panlfs-OST0022_UUID 366182400 1435498 364746902 0% /n/panlfs[OST:34]
panlfs-OST0023_UUID 366182400 1446112 364736288 0% /n/panlfs[OST:35]
panlfs-OST0024_UUID 488243200 1924003 486319197 0% /n/panlfs[OST:36]
panlfs-OST0025_UUID 488243200 1932387 486310813 0% /n/panlfs[OST:37]
panlfs-OST0026_UUID 366182400 1440911 364741489 0% /n/panlfs[OST:38]
panlfs-OST0027_UUID 366182400 1444227 364738173 0% /n/panlfs[OST:39]
panlfs-OST0028_UUID 366182400 1446877 364735523 0% /n/panlfs[OST:40]
panlfs-OST0029_UUID 366182400 1433436 364748964 0% /n/panlfs[OST:41]
panlfs-OST002a_UUID 366182400 1446075 364736325 0% /n/panlfs[OST:42]
panlfs-OST002b_UUID 366182400 1444201 364738199 0% /n/panlfs[OST:43]
panlfs-OST002c_UUID 488243200 1928893 486314307 0% /n/panlfs[OST:44]
panlfs-OST002d_UUID 488243200 1884929 486358271 0% /n/panlfs[OST:45]
panlfs-OST002e_UUID 366182400 1445724 364736676 0% /n/panlfs[OST:46]
panlfs-OST002f_UUID 366182400 1405979 364776421 0% /n/panlfs[OST:47]
panlfs-OST0030_UUID 366182400 1307695 364874705 0% /n/panlfs[OST:48]
panlfs-OST0031_UUID 366182400 1449966 364732434 0% /n/panlfs[OST:49]
panlfs-OST0032_UUID 366182400 1409123 364773277 0% /n/panlfs[OST:50]
panlfs-OST0033_UUID 366182400 1447543 364734857 0% /n/panlfs[OST:51]
panlfs-OST0034_UUID 488243200 1924754 486318446 0% /n/panlfs[OST:52]
panlfs-OST0035_UUID 488243200 1924491 486318709 0% /n/panlfs[OST:53]
panlfs-OST0036_UUID 366182400 1402421 364779979 0% /n/panlfs[OST:54]
panlfs-OST0037_UUID 366182400 1445228 364737172 0% /n/panlfs[OST:55]
panlfs-OST0038_UUID 366182400 1445448 364736952 0% /n/panlfs[OST:56]
panlfs-OST0039_UUID 366182400 1409186 364773214 0% /n/panlfs[OST:57]
panlfs-OST003a_UUID 366182400 1443444 364738956 0% /n/panlfs[OST:58]
panlfs-OST003b_UUID 366182400 1445485 364736915 0% /n/panlfs[OST:59]
panlfs-OST003c_UUID 488243200 1925336 486317864 0% /n/panlfs[OST:60]
panlfs-OST003d_UUID 488243200 1920467 486322733 0% /n/panlfs[OST:61]

filesystem summary: 337608704 100870063 236738641 29% /n/panlfs

On 4/21/11 2:40 PM, Andreas Dilger wrote:
> I'm trying to get some data about the relative distribution of MDS operations in the wild [...]
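[Since the point of the thread is the relative distribution of these operations, a quick way to read counter dumps like the one above is to let awk normalize the sample counts into percentages. A minimal sketch, assuming the same "op count samples [reqs]" output format shown in the replies; run it on the MDS node like the original command:

# Turn the raw MDS counters into a relative distribution:
# field 1 is the operation name, field 2 the sample count.
lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync" |
awk '{ count[$1] += $2; total += $2 }
     END { for (op in count)
             printf "%-10s %15d %6.2f%%\n", op, count[op], 100 * count[op] / total }'
]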
Mark Nelson
2011-Apr-21 21:47 UTC
[Lustre-devel] [Lustre-discuss] Research on filesystem metadata operation distribution
Hi Andreas,

Here are the statistics for our current Lustre storage as requested. This filesystem is used for scratch on a ~8700-core cluster.

[root@mds ~]# lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"
open 52365106 samples [reqs]
close 26309366 samples [reqs]
link 164 samples [reqs]
unlink 4199061 samples [reqs]
rename 605499 samples [reqs]
getattr 204467294 samples [reqs]
setattr 6947314 samples [reqs]
getxattr 1211618173 samples [reqs]
sync 2399 samples [reqs]

mark@node1083:~> lfs df
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 2446151592 7290552 2299069836 0% /scratch1[MDT:0]
lustre-OST0000_UUID 2850686364 2260170480 445708680 79% /scratch1[OST:0]
lustre-OST0001_UUID 2850686364 2360436928 345415236 82% /scratch1[OST:1]
lustre-OST0002_UUID 2850686364 2301852200 404026800 80% /scratch1[OST:2]
lustre-OST0003_UUID 2850686364 2297681284 408197928 80% /scratch1[OST:3]
lustre-OST0004_UUID 2850686364 2268630672 437220700 79% /scratch1[OST:4]
lustre-OST0005_UUID 2850686364 2352652832 353226340 82% /scratch1[OST:5]
lustre-OST0006_UUID 2850686364 2256628780 449250944 79% /scratch1[OST:6]
lustre-OST0007_UUID 2850686364 2239905176 465895280 78% /scratch1[OST:7]
lustre-OST0008_UUID 2850686364 2256707044 449145120 79% /scratch1[OST:8]
lustre-OST0009_UUID 2850686364 2318356564 387467160 81% /scratch1[OST:9]
lustre-OST000a_UUID 2850686364 2246170292 459654224 78% /scratch1[OST:10]
lustre-OST000b_UUID 2850686364 2289467904 416357844 80% /scratch1[OST:11]
lustre-OST000c_UUID 2850686364 2267773568 438077736 79% /scratch1[OST:12]
lustre-OST000d_UUID 2850686364 2370072892 335806104 83% /scratch1[OST:13]
lustre-OST000e_UUID 2850686364 2373615788 332262332 83% /scratch1[OST:14]
lustre-OST000f_UUID 2850686364 2268974064 436784684 79% /scratch1[OST:15]
lustre-OST0010_UUID 2850686364 2254083524 451795496 79% /scratch1[OST:16]
lustre-OST0011_UUID 2850686364 2305220652 400636828 80% /scratch1[OST:17]
lustre-OST0012_UUID 2850686364 2289688148 416189404 80% /scratch1[OST:18]
lustre-OST0013_UUID 2850686364 2248032400 457846596 78% /scratch1[OST:19]
lustre-OST0014_UUID 2850686364 2348553416 357297956 82% /scratch1[OST:20]
lustre-OST0015_UUID 2850686364 2336745344 369133676 81% /scratch1[OST:21]
lustre-OST0016_UUID 2850686364 2286710092 419095976 80% /scratch1[OST:22]
lustre-OST0017_UUID 2850686364 2258930636 446948364 79% /scratch1[OST:23]
lustre-OST0018_UUID 2850686364 2394833760 311045260 84% /scratch1[OST:24]
lustre-OST0019_UUID 2850686364 2315306792 390571460 81% /scratch1[OST:25]
lustre-OST001a_UUID 2850686364 2300589812 405289220 80% /scratch1[OST:26]
lustre-OST001b_UUID 2850686364 2296248192 409542792 80% /scratch1[OST:27]
lustre-OST001c_UUID 2850686364 2306921556 398957656 80% /scratch1[OST:28]
lustre-OST001d_UUID 2850686364 2277275716 428541804 79% /scratch1[OST:29]
lustre-OST001e_UUID 2850686364 2290738376 415141384 80% /scratch1[OST:30]
lustre-OST001f_UUID 2850686364 2358853432 347025588 82% /scratch1[OST:31]
lustre-OST0020_UUID 2850686364 2164837628 541041392 75% /scratch1[OST:32]
lustre-OST0021_UUID 2850686364 2250094264 455757108 78% /scratch1[OST:33]
lustre-OST0022_UUID 2850686364 2318377076 387474296 81% /scratch1[OST:34]
lustre-OST0023_UUID 2850686364 2287996640 417827084 80% /scratch1[OST:35]
lustre-OST0024_UUID 2850686364 2266655604 439213268 79% /scratch1[OST:36]
lustre-OST0025_UUID 2850686364 2196015760 509804940 77% /scratch1[OST:37]
lustre-OST0026_UUID 2850686364 2326710900 379141252 81% /scratch1[OST:38]
lustre-OST0027_UUID 2850686364 2263182244 442696788 79% /scratch1[OST:39]
lustre-OST0028_UUID 2850686364 2281411520 424466108 80% /scratch1[OST:40]
lustre-OST0029_UUID 2850686364 2253024788 452854156 79% /scratch1[OST:41]
lustre-OST002a_UUID 2850686364 2265263264 440614844 79% /scratch1[OST:42]
lustre-OST002b_UUID 2850686364 2358109672 347769300 82% /scratch1[OST:43]
lustre-OST002c_UUID 2850686364 2246612120 459253300 78% /scratch1[OST:44]
lustre-OST002d_UUID 2850686364 2315081464 390671376 81% /scratch1[OST:45]
lustre-OST002e_UUID 2850686364 2151456488 554423308 75% /scratch1[OST:46]
lustre-OST002f_UUID 2850686364 2276389044 429401272 79% /scratch1[OST:47]

filesystem summary: 136832945472 109819046792 20061966364 80% /scratch1

mark@node1083:~> lfs df -i
UUID Inodes IUsed IFree IUse% Mounted on
lustre-MDT0000_UUID 623065492 13350254 609715238 2% /scratch1[MDT:0]
lustre-OST0000_UUID 147951373 356246 147595127 0% /scratch1[OST:0]
lustre-OST0001_UUID 122874520 348299 122526221 0% /scratch1[OST:1]
lustre-OST0002_UUID 137525462 310049 137215413 0% /scratch1[OST:2]
lustre-OST0003_UUID 138550809 340738 138210071 0% /scratch1[OST:3]
lustre-OST0004_UUID 145867558 363872 145503686 0% /scratch1[OST:4]
lustre-OST0005_UUID 124846336 353476 124492860 0% /scratch1[OST:5]
lustre-OST0006_UUID 148769303 334806 148434497 0% /scratch1[OST:6]
lustre-OST0007_UUID 153043111 338572 152704539 0% /scratch1[OST:7]
lustre-OST0008_UUID 148836688 343526 148493162 0% /scratch1[OST:8]
lustre-OST0009_UUID 133406354 332875 133073479 0% /scratch1[OST:9]
lustre-OST000a_UUID 151428889 318410 151110479 0% /scratch1[OST:10]
lustre-OST000b_UUID 140643612 370513 140273099 0% /scratch1[OST:11]
lustre-OST000c_UUID 146024187 317395 145706792 0% /scratch1[OST:12]
lustre-OST000d_UUID 120508125 334975 120173150 0% /scratch1[OST:13]
lustre-OST000e_UUID 119591403 335426 119255977 0% /scratch1[OST:14]
lustre-OST000f_UUID 145755615 354062 145401553 0% /scratch1[OST:15]
lustre-OST0010_UUID 149476374 334318 149142056 0% /scratch1[OST:16]
lustre-OST0011_UUID 136596942 304167 136292775 0% /scratch1[OST:17]
lustre-OST0012_UUID 140548883 326959 140221924 0% /scratch1[OST:18]
lustre-OST0013_UUID 150984278 329164 150655114 0% /scratch1[OST:19]
lustre-OST0014_UUID 125850214 335954 125514260 0% /scratch1[OST:20]
lustre-OST0015_UUID 128803314 340503 128462811 0% /scratch1[OST:21]
lustre-OST0016_UUID 141332100 356012 140976088 0% /scratch1[OST:22]
lustre-OST0017_UUID 148282098 332665 147949433 0% /scratch1[OST:23]
lustre-OST0018_UUID 114305646 341206 113964440 0% /scratch1[OST:24]
lustre-OST0019_UUID 134162367 320108 133842259 0% /scratch1[OST:25]
lustre-OST001a_UUID 137846405 346482 137499923 0% /scratch1[OST:26]
lustre-OST001b_UUID 138954376 354556 138599820 0% /scratch1[OST:27]
lustre-OST001c_UUID 136261269 314506 135946763 0% /scratch1[OST:28]
lustre-OST001d_UUID 143665519 346163 143319356 0% /scratch1[OST:29]
lustre-OST001e_UUID 140292861 310260 139982601 0% /scratch1[OST:30]
lustre-OST001f_UUID 123343915 374877 122969038 0% /scratch1[OST:31]
lustre-OST0020_UUID 171817157 349887 171467270 0% /scratch1[OST:32]
lustre-OST0021_UUID 150499898 352977 150146921 0% /scratch1[OST:33]
lustre-OST0022_UUID 133410450 335784 133074666 0% /scratch1[OST:34]
lustre-OST0023_UUID 140965613 325669 140639944 0% /scratch1[OST:35]
lustre-OST0024_UUID 146302421 339776 145962645 0% /scratch1[OST:36]
lustre-OST0025_UUID 163987206 343710 163643496 0% /scratch1[OST:37]
lustre-OST0026_UUID 131322985 361117 130961868 0% /scratch1[OST:38]
lustre-OST0027_UUID 147155706 296609 146859097 0% /scratch1[OST:39]
lustre-OST0028_UUID 142674349 344009 142330340 0% /scratch1[OST:40]
lustre-OST0029_UUID 149723181 332111 149391070 0% /scratch1[OST:41]
lustre-OST002a_UUID 146671256 320469 146350787 0% /scratch1[OST:42]
lustre-OST002b_UUID 123435148 350587 123084561 0% /scratch1[OST:43]
lustre-OST002c_UUID 151317772 343224 150974548 0% /scratch1[OST:44]
lustre-OST002d_UUID 134212577 353558 133859019 0% /scratch1[OST:45]
lustre-OST002e_UUID 175115293 326239 174789054 0% /scratch1[OST:46]
lustre-OST002f_UUID 143864174 326506 143537668 0% /scratch1[OST:47]

filesystem summary: 623065492 13350254 609715238 2% /scratch1

--
Mark Nelson, HPC Systems Administrator
Minnesota Supercomputing Institute
Phone: (612)626-4479
Email: mark@msi.umn.edu
Felix, Evan J
2011-Apr-22 14:55 UTC
[Lustre-devel] Research on filesystem metadata operation distribution
Info on our current large cluster:

/mscf on Chinook: 20TB - mostly home directories and application binaries/libraries.

open 4554527798 samples [reqs]
close 473872913 samples [reqs]
link 681239 samples [reqs]
unlink 24285832 samples [reqs]
rename 3369767 samples [reqs]
getxattr 3006968011 samples [reqs]
setxattr 995589 samples [reqs]
setattr 30371813 samples [reqs]
getattr 2067961374 samples [reqs]

/dtemp on Chinook: 270TB - user data for processing and output.

open 946342778 samples [reqs]
close 136642017 samples [reqs]
link 1601 samples [reqs]
unlink 18105075 samples [reqs]
rename 6232018 samples [reqs]
getxattr 81475 samples [reqs]
setxattr 131891 samples [reqs]
setattr 34112680 samples [reqs]
getattr 1095110611 samples [reqs]

df info:
UUID 1K-blocks Used Available Use% Mounted on
sfs-MDT0000_UUID 1834869856 4327988 1725684272 0% /dtemp[MDT:0]
sfs-OST0000_UUID 2064245920 1068818228 984870244 51% /dtemp[OST:0]
sfs-OST0001_UUID 2064245920 1147446156 906245256 55% /dtemp[OST:1]
sfs-OST0002_UUID 2064245920 1113427560 940268856 53% /dtemp[OST:2]
sfs-OST0003_UUID 2064245920 1159010028 894677244 56% /dtemp[OST:3]
sfs-OST0004_UUID 2064245920 1147221340 906467384 55% /dtemp[OST:4]
sfs-OST0005_UUID 2064245920 1095167780 958527096 53% /dtemp[OST:5]
sfs-OST0006_UUID 2064245920 1063356308 990322272 51% /dtemp[OST:6]
sfs-OST0007_UUID 2064245920 1064588100 989099968 51% /dtemp[OST:7]
sfs-OST0008_UUID 2064245920 1190708224 862988936 57% /dtemp[OST:8]
sfs-OST0009_UUID 2064245920 1112665488 941024380 53% /dtemp[OST:9]
sfs-OST000a_UUID 2064245920 945953764 1107727176 45% /dtemp[OST:10]
sfs-OST000b_UUID 2064245920 1224351324 829342584 59% /dtemp[OST:11]
sfs-OST000c_UUID 2064245920 1080636532 973049116 52% /dtemp[OST:12]
sfs-OST000d_UUID 2064245920 1017028208 1036655292 49% /dtemp[OST:13]
sfs-OST000e_UUID 2064245920 1028559680 1025142148 49% /dtemp[OST:14]
sfs-OST000f_UUID 2064245920 1022769156 1030897212 49% /dtemp[OST:15]
sfs-OST0010_UUID 2064245920 1100019724 953687264 53% /dtemp[OST:16]
sfs-OST0011_UUID 2064245920 1052428920 1001264028 50% /dtemp[OST:17]
sfs-OST0012_UUID 2064245920 1013339316 1040330552 49% /dtemp[OST:18]
sfs-OST0013_UUID 2064245920 1080871180 972806700 52% /dtemp[OST:19]
sfs-OST0014_UUID 2064245920 987840500 1065852152 47% /dtemp[OST:20]
sfs-OST0015_UUID 2064245920 1103825556 949866500 53% /dtemp[OST:21]
sfs-OST0016_UUID 2064245920 1092751864 960926824 52% /dtemp[OST:22]
sfs-OST0017_UUID 2064245920 1076291320 977379232 52% /dtemp[OST:23]
sfs-OST0018_UUID 2064245920 1138079040 915597952 55% /dtemp[OST:24]
sfs-OST0019_UUID 2064245920 1135952920 917739652 55% /dtemp[OST:25]
sfs-OST001a_UUID 2064245920 1205589392 848111684 58% /dtemp[OST:26]
sfs-OST001b_UUID 2064245920 1073418548 980271440 52% /dtemp[OST:27]
sfs-OST001c_UUID 2064245920 1011666464 1042017224 49% /dtemp[OST:28]
sfs-OST001d_UUID 2064245920 952649808 1101040760 46% /dtemp[OST:29]
sfs-OST001e_UUID 2064245920 976172548 1077586588 47% /dtemp[OST:30]
sfs-OST001f_UUID 2064245920 986284680 1067396376 47% /dtemp[OST:31]
sfs-OST0020_UUID 2064245920 1154685332 899011516 55% /dtemp[OST:32]
sfs-OST0021_UUID 2064245920 1145045268 908638940 55% /dtemp[OST:33]
sfs-OST0022_UUID 2064245920 1002669208 1051017476 48% /dtemp[OST:34]
sfs-OST0023_UUID 2064245920 1196066908 857608280 57% /dtemp[OST:35]
sfs-OST0024_UUID 2064245920 1099025172 954671316 53% /dtemp[OST:36]
sfs-OST0025_UUID 2064245920 1056398932 997293372 51% /dtemp[OST:37]
sfs-OST0026_UUID 2064245920 1178277020 875422164 57% /dtemp[OST:38]
sfs-OST0027_UUID 2064245920 1204857620 848839936 58% /dtemp[OST:39]
sfs-OST0028_UUID 2064245920 1040356188 1013403316 50% /dtemp[OST:40]
sfs-OST0029_UUID 2064245920 1217397640 836297196 58% /dtemp[OST:41]
sfs-OST002a_UUID 2064245920 1126382584 927322856 54% /dtemp[OST:42]
sfs-OST002b_UUID 2064245920 1156904268 896788008 56% /dtemp[OST:43]
sfs-OST002c_UUID 2064245920 1175057720 878646580 56% /dtemp[OST:44]
sfs-OST002d_UUID 2064245920 1203375028 850316144 58% /dtemp[OST:45]
sfs-OST002e_UUID 2064245920 1112189444 941503504 53% /dtemp[OST:46]
sfs-OST002f_UUID 2064245920 1087356908 966328428 52% /dtemp[OST:47]
sfs-OST0030_UUID 2064245920 1211991352 841713660 58% /dtemp[OST:48]
sfs-OST0031_UUID 2064245920 990200212 969120668 47% /dtemp[OST:49]
sfs-OST0032_UUID 2064245920 1069921844 889408180 51% /dtemp[OST:50]
sfs-OST0033_UUID 2064245920 1009058432 950276708 48% /dtemp[OST:51]
sfs-OST0034_UUID 2064245920 1165364648 793969240 56% /dtemp[OST:52]
sfs-OST0035_UUID 2064245920 1205991652 847704532 58% /dtemp[OST:53]
sfs-OST0036_UUID 2064245920 1013856480 1039833748 49% /dtemp[OST:54]
sfs-OST0037_UUID 2064245920 1071568160 982125316 51% /dtemp[OST:55]
sfs-OST0038_UUID 2064245920 1067675792 986000280 51% /dtemp[OST:56]
sfs-OST0039_UUID 2064245920 1053890700 999813836 51% /dtemp[OST:57]
sfs-OST003a_UUID 2064245920 1135658592 918042728 55% /dtemp[OST:58]
sfs-OST003b_UUID 2064245920 1218140260 835564952 59% /dtemp[OST:59]
sfs-OST003c_UUID 2064245920 1035919232 1017757280 50% /dtemp[OST:60]
sfs-OST003d_UUID 2064245920 1016449632 1037243904 49% /dtemp[OST:61]
sfs-OST003e_UUID 2064245920 1128825020 924881152 54% /dtemp[OST:62]
sfs-OST003f_UUID 2064245920 1107880660 945794748 53% /dtemp[OST:63]
sfs-OST0040_UUID 2064245920 1185860764 867836796 57% /dtemp[OST:64]
sfs-OST0041_UUID 2064245920 1080292868 973412016 52% /dtemp[OST:65]
sfs-OST0042_UUID 2064245920 1112667788 941016688 53% /dtemp[OST:66]
sfs-OST0043_UUID 2064245920 1104678368 949023060 53% /dtemp[OST:67]
sfs-OST0044_UUID 2064245920 1106512800 947174584 53% /dtemp[OST:68]
sfs-OST0045_UUID 2064245920 1075918920 977765432 52% /dtemp[OST:69]
sfs-OST0046_UUID 2064245920 1180045252 873641736 57% /dtemp[OST:70]
sfs-OST0047_UUID 2064245920 1116326384 937355352 54% /dtemp[OST:71]
sfs-OST0048_UUID 2064245920 1206908764 846776016 58% /dtemp[OST:72]
sfs-OST0049_UUID 2064245920 1080584572 973118472 52% /dtemp[OST:73]
sfs-OST004a_UUID 2064245920 1134900624 918780276 54% /dtemp[OST:74]
sfs-OST004b_UUID 2064245920 1051051796 1002653240 50% /dtemp[OST:75]
sfs-OST004c_UUID 2064245920 999055640 1054628328 48% /dtemp[OST:76]
sfs-OST004d_UUID 2064245920 1039090536 1014603492 50% /dtemp[OST:77]
sfs-OST004e_UUID 2064245920 1096989904 956687368 53% /dtemp[OST:78]
sfs-OST004f_UUID 2064245920 1283380656 770315848 62% /dtemp[OST:79]
sfs-OST0050_UUID 2064245920 1076040124 977642024 52% /dtemp[OST:80]
sfs-OST0051_UUID 2064245920 984345000 1069336180 47% /dtemp[OST:81]
sfs-OST0052_UUID 2064245920 1013599600 1040091552 49% /dtemp[OST:82]
sfs-OST0053_UUID 2064245920 1160257412 893434280 56% /dtemp[OST:83]
sfs-OST0054_UUID 2064245920 1155219336 898480408 55% /dtemp[OST:84]
sfs-OST0055_UUID 2064245920 1123087368 930604308 54% /dtemp[OST:85]
sfs-OST0056_UUID 2064245920 1183220488 870473776 57% /dtemp[OST:86]
sfs-OST0057_UUID 2064245920 1017526048 1036152256 49% /dtemp[OST:87]
sfs-OST0058_UUID 2064245920 1154269816 899422772 55% /dtemp[OST:88]
sfs-OST0059_UUID 2064245920 1195696256 858001948 57% /dtemp[OST:89]
sfs-OST005a_UUID 2064245920 1193575944 860118532 57% /dtemp[OST:90]
sfs-OST005b_UUID 2064245920 1085868548 967818624 52% /dtemp[OST:91]
sfs-OST005c_UUID 2064245920 1044960288 1008718892 50% /dtemp[OST:92]
sfs-OST005d_UUID 2064245920 1082544472 971135872 52% /dtemp[OST:93]
sfs-OST005e_UUID 2064245920 1075745924 977952724 52% /dtemp[OST:94]
sfs-OST005f_UUID 2064245920 1054877620 998818084 51% /dtemp[OST:95]
sfs-OST0060_UUID 2064245920 1118777916 934912992 54% /dtemp[OST:96]
sfs-OST0061_UUID 2064245920 1067545756 986139120 51% /dtemp[OST:97]
sfs-OST0062_UUID 2064245920 1023269860 1030489276 49% /dtemp[OST:98]
sfs-OST0063_UUID 2064245920 1280644316 773067836 62% /dtemp[OST:99]
sfs-OST0064_UUID 2064245920 1097805864 955887200 53% /dtemp[OST:100]
sfs-OST0065_UUID 2064245920 1021619552 1032058604 49% /dtemp[OST:101]
sfs-OST0066_UUID 2064245920 1057169164 996534968 51% /dtemp[OST:102]
sfs-OST0067_UUID 2064245920 1146992012 906681344 55% /dtemp[OST:103]
sfs-OST0068_UUID 2064245920 1026243604 1027447080 49% /dtemp[OST:104]
sfs-OST0069_UUID 2064245920 1004975972 1048712764 48% /dtemp[OST:105]
sfs-OST006a_UUID 2064245920 1084540164 969157304 52% /dtemp[OST:106]
sfs-OST006b_UUID 2064245920 1196168044 857520712 57% /dtemp[OST:107]
sfs-OST006c_UUID 2064245920 1061778752 991903068 51% /dtemp[OST:108]
sfs-OST006d_UUID 2064245920 1027008356 1026677916 49% /dtemp[OST:109]
sfs-OST006e_UUID 2064245920 1039766492 1013915812 50% /dtemp[OST:110]
sfs-OST006f_UUID 2064245920 1145531072 908160924 55% /dtemp[OST:111]
sfs-OST0070_UUID 2064245920 1073432604 980256548 52% /dtemp[OST:112]
sfs-OST0071_UUID 2064245920 1040646388 1013052688 50% /dtemp[OST:113]
sfs-OST0072_UUID 2064245920 1080501192 973182680 52% /dtemp[OST:114]
sfs-OST0073_UUID 2064245920 1130126556 923578552 54% /dtemp[OST:115]
sfs-OST0074_UUID 2064245920 1219314512 834380732 59% /dtemp[OST:116]
sfs-OST0075_UUID 2064245920 1099077204 954603912 53% /dtemp[OST:117]
sfs-OST0076_UUID 2064245920 1172247528 881433476 56% /dtemp[OST:118]
sfs-OST0077_UUID 2064245920 1029514300 1024132536 49% /dtemp[OST:119]
sfs-OST0078_UUID 2064245920 1091720628 961961084 52% /dtemp[OST:120]
sfs-OST0079_UUID 2064245920 1108579792 945112504 53% /dtemp[OST:121]
sfs-OST007a_UUID 2064245920 996134068 1057564556 48% /dtemp[OST:122]
sfs-OST007b_UUID 2064245920 1077044384 976639468 52% /dtemp[OST:123]
sfs-OST007c_UUID 2064245920 1145309944 908385504 55% /dtemp[OST:124]
sfs-OST007d_UUID 2064245920 1073612880 980068604 52% /dtemp[OST:125]
sfs-OST007e_UUID 2064245920 1075483592 978201364 52% /dtemp[OST:126]
sfs-OST007f_UUID 2064245920 1120388952 933310856 54% /dtemp[OST:127]
sfs-OST0080_UUID 2064245920 1156587688 897128456 56% /dtemp[OST:128]
sfs-OST0081_UUID 2064245920 1030957312 1022709824 49% /dtemp[OST:129]
sfs-OST0082_UUID 2064245920 999881344 1053790940 48% /dtemp[OST:130]
sfs-OST0083_UUID 2064245920 1095521408 958174240 53% /dtemp[OST:131]
sfs-OST0084_UUID 2064245920 1221129792 832576716 59% /dtemp[OST:132]
sfs-OST0085_UUID 2064245920 994100256 1059584780 48% /dtemp[OST:133]
sfs-OST0086_UUID 2064245920 1160580632 893117576 56% /dtemp[OST:134]
sfs-OST0087_UUID 2064245920 1192673900 861027648 57% /dtemp[OST:135]
sfs-OST0088_UUID 2064245920 1061730172 991947440 51% /dtemp[OST:136]
sfs-OST0089_UUID 2064245920 1163157924 890531400 56% /dtemp[OST:137]
sfs-OST008a_UUID 2064245920 1018606752 1035074608 49% /dtemp[OST:138]
sfs-OST008b_UUID 2064245920 1135650388 918033196 55% /dtemp[OST:139]
sfs-OST008c_UUID 2064245920 1064232464 989443304 51% /dtemp[OST:140]
sfs-OST008d_UUID 2064245920 1157718560 895964732 56% /dtemp[OST:141]
sfs-OST008e_UUID 2064245920 1099315156 954369152 53% /dtemp[OST:142]
sfs-OST008f_UUID 2064245920 1021270580 1032433420 49% /dtemp[OST:143]

filesystem summary: 297251412480 158088513192 137265570408 53% /dtemp

UUID 1K-blocks Used Available Use% Mounted on
vader-mds1_UUID 1834869856 13282676 1716729584 0% /mscf[MDT:0]
vader-ost1_UUID 1334535272 1068848892 252068648 80% /mscf[OST:0]
vader-ost2_UUID 1334535272 1103266920 217654380 82% /mscf[OST:1]
vader-ost3_UUID 1334535272 1042379244 278463360 78% /mscf[OST:2]
vader-ost4_UUID 1334535272 1043839320 277043180 78% /mscf[OST:3]
vader-ost5_UUID 1334535272 1069106456 251805632 80% /mscf[OST:4]
vader-ost6_UUID 1334535272 1063143776 257762956 79% /mscf[OST:5]
vader-ost7_UUID 1334535272 1076778096 244094912 80% /mscf[OST:6]
vader-ost8_UUID 1334535272 1032947836 287954436 77% /mscf[OST:7]
vader-ost9_UUID 1334535272 1087367008 233610108 81% /mscf[OST:8]
vader-ost10_UUID 1334535272 1095883032 225037608 82% /mscf[OST:9]
vader-ost11_UUID 1334535272 1038984844 281938072 77% /mscf[OST:10]
vader-ost12_UUID 1334535272 1080016528 240874192 80% /mscf[OST:11]
vader-ost13_UUID 1334535272 1023102820 297807716 76% /mscf[OST:12]
vader-ost14_UUID 1334535272 979408332 341486352 73% /mscf[OST:13]
vader-ost15_UUID 1334535272 1099037168 221864212 82% /mscf[OST:14]
vader-ost16_UUID 1334535272 1045873940 275045200 78% /mscf[OST:15]

filesystem summary: 21352564352 16949984212 4184510964 79% /mscf

UUID Inodes IUsed IFree IUse% Mounted on
sfs-MDT0000_UUID 474870531 17235068 457635463 3% /dtemp[MDT:0]
sfs-OST0000_UUID 131072000 933107 130138893 0% /dtemp[OST:0]
sfs-OST0001_UUID 131072000 909959 130162041 0% /dtemp[OST:1]
sfs-OST0002_UUID 131072000 863249 130208751 0% /dtemp[OST:2]
sfs-OST0003_UUID 131072000 802817 130269183 0% /dtemp[OST:3]
sfs-OST0004_UUID 131072000 1005695 130066305 0% /dtemp[OST:4]
sfs-OST0005_UUID 131072000 816126 130255874 0% /dtemp[OST:5]
sfs-OST0006_UUID 131072000 820701 130251299 0% /dtemp[OST:6]
sfs-OST0007_UUID 131072000 945117 130126883 0% /dtemp[OST:7]
sfs-OST0008_UUID 131072000 988375 130083625 0% /dtemp[OST:8]
sfs-OST0009_UUID 131072000 962663 130109337 0% /dtemp[OST:9]
sfs-OST000a_UUID 131072000 1004796 130067204 0% /dtemp[OST:10]
sfs-OST000b_UUID 131072000 908056 130163944 0% /dtemp[OST:11]
sfs-OST000c_UUID 131072000 977148 130094852 0% /dtemp[OST:12]
sfs-OST000d_UUID 131072000 963137 130108863 0% /dtemp[OST:13]
sfs-OST000e_UUID 131072000 788005 130283995 0% /dtemp[OST:14]
sfs-OST000f_UUID 131072000 716657 130355343 0% /dtemp[OST:15]
sfs-OST0010_UUID 131072000 997047 130074953 0% /dtemp[OST:16]
sfs-OST0011_UUID 131072000 996429 130075571 0% /dtemp[OST:17]
sfs-OST0012_UUID 131072000 925896 130146104 0% /dtemp[OST:18]
sfs-OST0013_UUID 131072000 974211 130097789 0% /dtemp[OST:19]
sfs-OST0014_UUID 131072000 983833 130088167 0% /dtemp[OST:20]
sfs-OST0015_UUID 131072000 921903 130150097 0% /dtemp[OST:21]
sfs-OST0016_UUID 131072000 938759 130133241 0% /dtemp[OST:22]
sfs-OST0017_UUID 131072000 778331 130293669 0% /dtemp[OST:23]
sfs-OST0018_UUID 131072000 940254 130131746 0% /dtemp[OST:24]
sfs-OST0019_UUID 131072000 791892 130280108 0% /dtemp[OST:25]
sfs-OST001a_UUID 131072000 981569 130090431 0% /dtemp[OST:26]
sfs-OST001b_UUID 131072000 953077 130118923 0% /dtemp[OST:27]
sfs-OST001c_UUID 131072000 891157 130180843 0% /dtemp[OST:28]
sfs-OST001d_UUID 131072000 976900 130095100 0% /dtemp[OST:29]
sfs-OST001e_UUID 131072000 1040702 130031298 0% /dtemp[OST:30]
sfs-OST001f_UUID 131072000 976216 130095784 0% /dtemp[OST:31]
sfs-OST0020_UUID 131072000 912421 130159579 0% /dtemp[OST:32]
sfs-OST0021_UUID 131072000 927803 130144197 0% /dtemp[OST:33]
sfs-OST0022_UUID 131072000 945489 130126511 0% /dtemp[OST:34]
sfs-OST0023_UUID 131072000 929574 130142426 0% /dtemp[OST:35]
sfs-OST0024_UUID 131072000 904851 130167149 0% /dtemp[OST:36]
sfs-OST0025_UUID 131072000 947878 130124122 0% /dtemp[OST:37]
sfs-OST0026_UUID 131072000 954019 130117981 0% /dtemp[OST:38]
sfs-OST0027_UUID 131072000 777785 130294215 0% /dtemp[OST:39]
sfs-OST0028_UUID 131072000 924282 130147718 0% /dtemp[OST:40]
sfs-OST0029_UUID 131072000 921669 130150331 0% /dtemp[OST:41]
sfs-OST002a_UUID 131072000 956530 130115470 0% /dtemp[OST:42]
sfs-OST002b_UUID 131072000 1000389 130071611 0% /dtemp[OST:43]
sfs-OST002c_UUID 131072000 797964 130274036 0% /dtemp[OST:44]
sfs-OST002d_UUID 131072000 986655 130085345 0% /dtemp[OST:45]
sfs-OST002e_UUID 131072000 887212 130184788 0% /dtemp[OST:46]
sfs-OST002f_UUID 131072000 997138 130074862 0% /dtemp[OST:47]
sfs-OST0030_UUID 131072000 976089 130095911 0% /dtemp[OST:48]
sfs-OST0031_UUID 131072000 898961 130173039 0% /dtemp[OST:49]
sfs-OST0032_UUID 131072000 930610 130141390 0% /dtemp[OST:50]
sfs-OST0033_UUID 131072000 977095 130094905 0% /dtemp[OST:51]
sfs-OST0034_UUID 131072000 993845 130078155 0% /dtemp[OST:52]
sfs-OST0035_UUID 131072000 911942 130160058 0% /dtemp[OST:53]
sfs-OST0036_UUID 131072000 924638 130147362 0% /dtemp[OST:54]
sfs-OST0037_UUID 131072000 958901 130113099 0% /dtemp[OST:55]
sfs-OST0038_UUID 131072000 979253 130092747 0% /dtemp[OST:56]
sfs-OST0039_UUID 131072000 980803 130091197 0% /dtemp[OST:57]
sfs-OST003a_UUID 131072000 791658 130280342 0% /dtemp[OST:58]
sfs-OST003b_UUID 131072000 911553 130160447 0% /dtemp[OST:59]
sfs-OST003c_UUID 131072000 854410 130217590 0% /dtemp[OST:60]
sfs-OST003d_UUID 131072000 966392 130105608 0% /dtemp[OST:61]
sfs-OST003e_UUID 131072000 972723 130099277 0% /dtemp[OST:62]
sfs-OST003f_UUID 131072000 973009 130098991 0% /dtemp[OST:63]
sfs-OST0040_UUID 131072000 900419 130171581 0% /dtemp[OST:64]
sfs-OST0041_UUID 131072000 899906 130172094 0% /dtemp[OST:65]
sfs-OST0042_UUID 131072000 949489 130122511 0% /dtemp[OST:66]
sfs-OST0043_UUID 131072000 992597 130079403 0% /dtemp[OST:67]
sfs-OST0044_UUID 131072000 1001642 130070358 0% /dtemp[OST:68]
sfs-OST0045_UUID 131072000 951350 130120650 0% /dtemp[OST:69]
sfs-OST0046_UUID 131072000 956321 130115679 0% /dtemp[OST:70]
sfs-OST0047_UUID 131072000 732851 130339149 0% /dtemp[OST:71]
sfs-OST0048_UUID 131072000 927208 130144792 0% /dtemp[OST:72]
sfs-OST0049_UUID 131072000 914598 130157402 0% /dtemp[OST:73]
sfs-OST004a_UUID 131072000 903386 130168614 0% /dtemp[OST:74]
sfs-OST004b_UUID 131072000 938516 130133484 0% /dtemp[OST:75]
sfs-OST004c_UUID 131072000 985172 130086828 0% /dtemp[OST:76]
sfs-OST004d_UUID 131072000 956051 130115949 0% /dtemp[OST:77]
sfs-OST004e_UUID 131072000 946741 130125259 0% /dtemp[OST:78]
sfs-OST004f_UUID 131072000 897018 130174982 0% /dtemp[OST:79]
sfs-OST0050_UUID 131072000 854290 130217710 0% /dtemp[OST:80]
sfs-OST0051_UUID 131072000 958563 130113437 0% /dtemp[OST:81]
sfs-OST0052_UUID 131072000 888145 130183855 0% /dtemp[OST:82]
sfs-OST0053_UUID 131072000 798914 130273086 0% /dtemp[OST:83]
sfs-OST0054_UUID 131072000 994307 130077693 0% /dtemp[OST:84]
sfs-OST0055_UUID 131072000 876102 130195898 0% /dtemp[OST:85]
sfs-OST0056_UUID 131072000 992537 130079463 0% /dtemp[OST:86]
sfs-OST0057_UUID 131072000 1025381 130046619 0% /dtemp[OST:87]
sfs-OST0058_UUID 131072000 883171 130188829 0% /dtemp[OST:88]
sfs-OST0059_UUID 131072000 963811 130108189 0% /dtemp[OST:89]
sfs-OST005a_UUID 131072000 956580 130115420 0% /dtemp[OST:90]
sfs-OST005b_UUID 131072000 1018335 130053665 0% /dtemp[OST:91]
sfs-OST005c_UUID 131072000 921394 130150606 0% /dtemp[OST:92]
sfs-OST005d_UUID 131072000 997103 130074897 0% /dtemp[OST:93]
sfs-OST005e_UUID 131072000 959395 130112605 0% /dtemp[OST:94]
sfs-OST005f_UUID 131072000 957806 130114194 0% /dtemp[OST:95]
sfs-OST0060_UUID 131072000 961372 130110628 0% /dtemp[OST:96]
sfs-OST0061_UUID 131072000 876284 130195716 0% /dtemp[OST:97]
sfs-OST0062_UUID 131072000 976349 130095651 0% /dtemp[OST:98]
sfs-OST0063_UUID 131072000 927210 130144790 0% /dtemp[OST:99]
sfs-OST0064_UUID 131072000 942963 130129037 0% /dtemp[OST:100]
sfs-OST0065_UUID 131072000 927662 130144338 0% /dtemp[OST:101]
sfs-OST0066_UUID 131072000 993961 130078039 0% /dtemp[OST:102]
sfs-OST0067_UUID 131072000 952107 130119893 0% /dtemp[OST:103]
sfs-OST0068_UUID 131072000 960114 130111886 0% /dtemp[OST:104]
sfs-OST0069_UUID 131072000 1019083 130052917 0% /dtemp[OST:105]
sfs-OST006a_UUID 131072000 1004608 130067392 0% /dtemp[OST:106]
sfs-OST006b_UUID 131072000 891039 130180961 0% /dtemp[OST:107]
sfs-OST006c_UUID 131072000 969055 130102945 0% /dtemp[OST:108]
sfs-OST006d_UUID 131072000 963317 130108683 0% /dtemp[OST:109]
sfs-OST006e_UUID 131072000 994995 130077005 0% /dtemp[OST:110]
sfs-OST006f_UUID 131072000 887923 130184077 0% /dtemp[OST:111]
sfs-OST0070_UUID 131072000 968551 130103449 0% /dtemp[OST:112]
sfs-OST0071_UUID 131072000 947606 130124394 0% /dtemp[OST:113]
sfs-OST0072_UUID 131072000 941221 130130779 0% /dtemp[OST:114]
sfs-OST0073_UUID 131072000 912978 130159022 0% /dtemp[OST:115]
sfs-OST0074_UUID 131072000 866475 130205525 0% /dtemp[OST:116]
sfs-OST0075_UUID 131072000 943833 130128167 0% /dtemp[OST:117]
sfs-OST0076_UUID 131072000 966207 130105793 0% /dtemp[OST:118]
sfs-OST0077_UUID 131072000 925939 130146061 0% /dtemp[OST:119]
sfs-OST0078_UUID 131072000 878610 130193390 0% /dtemp[OST:120]
sfs-OST0079_UUID 131072000 972294 130099706 0% /dtemp[OST:121]
sfs-OST007a_UUID 131072000 983109 130088891 0% /dtemp[OST:122]
sfs-OST007b_UUID 131072000 992429 130079571 0% /dtemp[OST:123]
sfs-OST007c_UUID 131072000 884171 130187829 0% /dtemp[OST:124]
sfs-OST007d_UUID 131072000 984967 130087033 0% /dtemp[OST:125]
sfs-OST007e_UUID 131072000 974043 130097957 0% /dtemp[OST:126]
sfs-OST007f_UUID 131072000 904518 130167482 0% /dtemp[OST:127]
sfs-OST0080_UUID 131072000 803345 130268655 0% /dtemp[OST:128]
sfs-OST0081_UUID 131072000 988062 130083938 0% /dtemp[OST:129]
sfs-OST0082_UUID 131072000 952675 130119325 0% /dtemp[OST:130]
sfs-OST0083_UUID 131072000 910066 130161934 0% /dtemp[OST:131]
sfs-OST0084_UUID 131072000 895140 130176860 0% /dtemp[OST:132]
sfs-OST0085_UUID 131072000 1036804 130035196 0% /dtemp[OST:133]
sfs-OST0086_UUID 131072000 892136 130179864 0% /dtemp[OST:134]
sfs-OST0087_UUID 131072000 809896 130262104 0% /dtemp[OST:135]
sfs-OST0088_UUID 131072000 786857 130285143 0% /dtemp[OST:136]
sfs-OST0089_UUID 131072000 789836 130282164 0% /dtemp[OST:137]
sfs-OST008a_UUID 131072000 1034379 130037621 0% /dtemp[OST:138]
sfs-OST008b_UUID 131072000 887163 130184837 0% /dtemp[OST:139]
sfs-OST008c_UUID 131072000 975873 130096127 0% /dtemp[OST:140]
sfs-OST008d_UUID 131072000 921621 130150379 0% /dtemp[OST:141]
sfs-OST008e_UUID 131072000 992654 130079346 0% /dtemp[OST:142]
sfs-OST008f_UUID 131072000 936173 130135827 0% /dtemp[OST:143]

filesystem summary: 474870531 17235068 457635463 3% /dtemp

UUID Inodes IUsed IFree IUse% Mounted on
vader-mds1_UUID 486297503 30900708 455396795 6% /mscf[MDT:0]
vader-ost1_UUID 68167938 1746343 66421595 2% /mscf[OST:0]
vader-ost2_UUID 59548367 1730686 57817681 2% /mscf[OST:1]
vader-ost3_UUID 74872494 1833651 73038843 2% /mscf[OST:2]
vader-ost4_UUID 74466464 1792689 72673775 2% /mscf[OST:3]
vader-ost5_UUID 68137249 1780726 66356523 2% /mscf[OST:4]
vader-ost6_UUID 69541748 1694569 67847179 2% /mscf[OST:5]
vader-ost7_UUID 66083245 1644111 64439134 2% /mscf[OST:6]
vader-ost8_UUID 77159492 1762644 75396848 2% /mscf[OST:7]
vader-ost9_UUID 63563009 1770994 61792015 2% /mscf[OST:8]
vader-ost10_UUID 61284780 1621740 59663040 2% /mscf[OST:9]
vader-ost11_UUID 75684308 1796701 73887607 2% /mscf[OST:10]
vader-ost12_UUID 65304005 1674597 63629408 2% /mscf[OST:11]
vader-ost13_UUID 79549121 1690897 77858224 2% /mscf[OST:12]
vader-ost14_UUID 84738048 1577666 83160382 1% /mscf[OST:13]
vader-ost15_UUID 60569734 1695209 58874525 2% /mscf[OST:14]
vader-ost16_UUID 74029395 1864062 72165333 2% /mscf[OST:15]

filesystem summary: 486297503 30900708 455396795 6% /mscf

-----Original Message-----
From: lustre-devel-bounces@lists.lustre.org [mailto:lustre-devel-bounces@lists.lustre.org] On Behalf Of Andreas Dilger
Sent: Thursday, April 21, 2011 11:40 AM
To: lustre-discuss@lists.lustre.org discuss; lustre-devel@lists.lustre.org Mailing List
Subject: [Lustre-devel] Research on filesystem metadata operation distribution

I'm trying to get some data about the relative distribution of MDS operations in the wild [...]
Christopher J. Walker
2011-Apr-22 18:47 UTC
[Lustre-devel] Research on filesystem metadata operation distribution
On 21/04/11 19:40, Andreas Dilger wrote:
> I'm trying to get some data about the relative distribution of MDS operations in the wild, and I'd be grateful if some people with production filesystems that have been running for at least a week could collect some simple stats and email them to me. They can be collected by any regular user on the MDS node:

The filesystem lustre_0 (lustre_1 is used for small tests) is used for storage of data from the LHC at CERN. We are a WLCG Tier-2 site and use the "StoRM" SRM implementation. The filesystem mostly holds datasets around 2 GB in size, but the software area is also on Lustre (except for our main customer, ATLAS, for whom the MDS was a bottleneck, so we now use the CVMFS caching filesystem). The filesystem has been up 18 days, but under somewhat reduced load compared to normal, due to problems with our air conditioning.

> lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"

[root@sn01 ~]# lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"
open 6483556050 samples [reqs]
close 298937510 samples [reqs]
link 207 samples [reqs]
unlink 1642197 samples [reqs]
rename 74790 samples [reqs]
getattr 604826669 samples [reqs]
setattr 732241 samples [reqs]
getxattr 2500316 samples [reqs]
setxattr 2869526 samples [reqs]
sync 734 samples [reqs]
open 78316 samples [reqs]
close 22048 samples [reqs]
getattr 139519 samples [reqs]
getxattr 1 samples [reqs]

> It would be useful to also include "lfs df" and "lfs df -i" information, as well as a brief description of what the filesystem is used for (scratch, home, project, archive, etc).

[root@fe08 ~]# lfs df
UUID 1K-blocks Used Available Use% Mounted on
lustre_0-MDT0000_UUID 1279344884 16915292 1262429592 1% /mnt/lustre_0[MDT:0]
lustre_0-OST0000_UUID 6486115712 5350727580 1135388004 82% /mnt/lustre_0[OST:0]
lustre_0-OST0001_UUID 6486115712 5254948988 1231166596 81% /mnt/lustre_0[OST:1]
lustre_0-OST0002_UUID 6486115712 5369069128 1117046456 82% /mnt/lustre_0[OST:2]
lustre_0-OST0003_UUID 6486115712 5312081284 1174034300 81% /mnt/lustre_0[OST:3]
lustre_0-OST0004_UUID 6486115712 5313935872 1172179776 81% /mnt/lustre_0[OST:4]
lustre_0-OST0005_UUID 6486115712 5386002008 1100113576 83% /mnt/lustre_0[OST:5]
lustre_0-OST0006_UUID 6486115712 5269309656 1216805928 81% /mnt/lustre_0[OST:6]
lustre_0-OST0007_UUID 6486115712 5328370940 1157744644 82% /mnt/lustre_0[OST:7]
lustre_0-OST0008_UUID 6486115712 5347529932 1138585652 82% /mnt/lustre_0[OST:8]
lustre_0-OST0009_UUID 6486115712 5341504240 1144611344 82% /mnt/lustre_0[OST:9]
lustre_0-OST000a_UUID 6486115712 5333294764 1152820820 82% /mnt/lustre_0[OST:10]
lustre_0-OST000b_UUID 6486115712 5372246460 1113869124 82% /mnt/lustre_0[OST:11]
lustre_0-OST000c_UUID 6486115712 5285503076 1200611484 81% /mnt/lustre_0[OST:12]
lustre_0-OST000d_UUID 6486115712 5289776224 1196339360 81% /mnt/lustre_0[OST:13]
lustre_0-OST000e_UUID 6486115712 5367755168 1118360416 82% /mnt/lustre_0[OST:14]
lustre_0-OST000f_UUID 6486115712 5178737856 1307377792 79% /mnt/lustre_0[OST:15]
lustre_0-OST0010_UUID 6486115712 5301360576 1184755072 81% /mnt/lustre_0[OST:16]
lustre_0-OST0011_UUID 6486115712 5246933508 1239182076 80% /mnt/lustre_0[OST:17]
lustre_0-OST0012_UUID 6486115712 5340756896 1145358688 82% /mnt/lustre_0[OST:18]
lustre_0-OST0013_UUID 6486115712 5227080140 1259035444 80% /mnt/lustre_0[OST:19]
lustre_0-OST0014_UUID 6486115712 5324360452 1161755132 82% /mnt/lustre_0[OST:20]
lustre_0-OST0015_UUID 6486115712 5304634880 1181480704 81% /mnt/lustre_0[OST:21]
lustre_0-OST0016_UUID 6486115712 5342578956 1143536628 82% /mnt/lustre_0[OST:22]
lustre_0-OST0017_UUID 6486115712 5352854748 1133260900 82% /mnt/lustre_0[OST:23]
lustre_0-OST0018_UUID 6486115712 5341708760 1144406760 82% /mnt/lustre_0[OST:24]
lustre_0-OST0019_UUID 6486115712 5332733132 1153382452 82% /mnt/lustre_0[OST:25]
lustre_0-OST001a_UUID 6486115712 5366956328 1119159256 82% /mnt/lustre_0[OST:26]
lustre_0-OST001b_UUID 6486115712 5251430884 1234684764 80% /mnt/lustre_0[OST:27]
lustre_0-OST001c_UUID 6486115712 5166970324 1319145260 79% /mnt/lustre_0[OST:28]
lustre_0-OST001d_UUID 6486115712 5344269684 1141845900 82% /mnt/lustre_0[OST:29]
lustre_0-OST001e_UUID 6486115712 5230794024 1255321624 80% /mnt/lustre_0[OST:30]
lustre_0-OST001f_UUID 6486115712 5402916936 1083198648 83% /mnt/lustre_0[OST:31]
lustre_0-OST0020_UUID 6486115712 5263884340 1222231308 81% /mnt/lustre_0[OST:32]
lustre_0-OST0021_UUID 6486115712 5376336376 1109778888 82% /mnt/lustre_0[OST:33]
lustre_0-OST0022_UUID 6486115712 5347838416 1138277168 82% /mnt/lustre_0[OST:34]
lustre_0-OST0023_UUID 6486115712 5215511904 1270603744 80% /mnt/lustre_0[OST:35]
lustre_0-OST0024_UUID 6486115712 5366457356 1119658228 82% /mnt/lustre_0[OST:36]
lustre_0-OST0025_UUID 6486115712 5292414492 1193701092 81% /mnt/lustre_0[OST:37]
lustre_0-OST0026_UUID 6486115712 5271663256 1214452328 81% /mnt/lustre_0[OST:38]
lustre_0-OST0027_UUID 6486115712 5263536792 1222578792 81% /mnt/lustre_0[OST:39]
lustre_0-OST0028_UUID 6486115712 5340305084 1145810500 82% /mnt/lustre_0[OST:40]
lustre_0-OST0029_UUID 6486115712 5394190360 1091925224 83% /mnt/lustre_0[OST:41]
lustre_0-OST002a_UUID 6486115712 5335685244 1150429316 82% /mnt/lustre_0[OST:42]
lustre_0-OST002b_UUID 6486115712 5106503104 1379612480 78% /mnt/lustre_0[OST:43]
lustre_0-OST002c_UUID 6486115712 5152124056 1333991592 79% /mnt/lustre_0[OST:44]
lustre_0-OST002d_UUID 6486115712 5288720908 1197394740 81% /mnt/lustre_0[OST:45]
lustre_0-OST002e_UUID 6486115712 5065686264 1420429384 78% /mnt/lustre_0[OST:46]
lustre_0-OST002f_UUID 6486115712 5370371776 1115743872 82% /mnt/lustre_0[OST:47]
filesystem summary: 311333554176 254430363132 56903183236 81% /mnt/lustre_0

UUID 1K-blocks Used Available Use% Mounted on
lustre_1-MDT0000_UUID 1279344884 539236 1278805648 0% /mnt/lustre_1[MDT:0]
lustre_1-OST0000_UUID 7930215400 4302650476 3627564860 54% /mnt/lustre_1[OST:0]
lustre_1-OST0001_UUID 7930215400 4327747852 3602467484 54% /mnt/lustre_1[OST:1]
lustre_1-OST0002_UUID 7930215400 4424687248 3505528088 55% /mnt/lustre_1[OST:2]
lustre_1-OST0003_UUID 7930215400 4233067620 3697147716 53% /mnt/lustre_1[OST:3]
filesystem summary: 31720861600 17288153196 14432708148 54% /mnt/lustre_1

[root at fe08 ~]# lfs df -i
UUID Inodes IUsed IFree IUse% Mounted on
lustre_0-MDT0000_UUID 344890671 29283273 315607398 8% /mnt/lustre_0[MDT:0]
lustre_0-OST0000_UUID 284348159 501126 283847033 0% /mnt/lustre_0[OST:0]
lustre_0-OST0001_UUID 308308277 516596 307791681 0% /mnt/lustre_0[OST:1]
lustre_0-OST0002_UUID 279762988 501342 279261646 0% /mnt/lustre_0[OST:2]
lustre_0-OST0003_UUID 294035715 527108 293508607 0% /mnt/lustre_0[OST:3]
lustre_0-OST0004_UUID 293569946 524986 293044960 0% /mnt/lustre_0[OST:4]
lustre_0-OST0005_UUID 275499122 470696 275028426 0% /mnt/lustre_0[OST:5]
lustre_0-OST0006_UUID 304750117 548603 304201514 0% /mnt/lustre_0[OST:6]
lustre_0-OST0007_UUID 289986672 550479 289436193 0% /mnt/lustre_0[OST:7]
lustre_0-OST0008_UUID 285181040 534595 284646445 0% /mnt/lustre_0[OST:8]
lustre_0-OST0009_UUID 286682099 529231 286152868 0% /mnt/lustre_0[OST:9]
lustre_0-OST000a_UUID 288750615 545378 288205237 0% /mnt/lustre_0[OST:10]
lustre_0-OST000b_UUID 278999894 532581 278467313 0% /mnt/lustre_0[OST:11]
lustre_0-OST000c_UUID 300687110 533951 300153159 0% /mnt/lustre_0[OST:12]
lustre_0-OST000d_UUID 299617234 532362 299084872 0% /mnt/lustre_0[OST:13]
lustre_0-OST000e_UUID 280075762 485626 279590136 0% /mnt/lustre_0[OST:14]
lustre_0-OST000f_UUID 327385455 540991 326844464 0% /mnt/lustre_0[OST:15]
lustre_0-OST0010_UUID 296705745 516961 296188784 0% /mnt/lustre_0[OST:16]
lustre_0-OST0011_UUID 310345458 549907 309795551 0% /mnt/lustre_0[OST:17]
lustre_0-OST0012_UUID 286866260 526556 286339704 0% /mnt/lustre_0[OST:18]
lustre_0-OST0013_UUID 315308235 549342 314758893 0% /mnt/lustre_0[OST:19]
lustre_0-OST0014_UUID 290967510 528695 290438815 0% /mnt/lustre_0[OST:20]
lustre_0-OST0015_UUID 295903066 532858 295370208 0% /mnt/lustre_0[OST:21]
lustre_0-OST0016_UUID 286398762 514573 285884189 0% /mnt/lustre_0[OST:22]
lustre_0-OST0017_UUID 283848065 532824 283315241 0% /mnt/lustre_0[OST:23]
lustre_0-OST0018_UUID 286663505 561767 286101738 0% /mnt/lustre_0[OST:24]
lustre_0-OST0019_UUID 288896723 551078 288345645 0% /mnt/lustre_0[OST:25]
lustre_0-OST001a_UUID 280340068 550222 279789846 0% /mnt/lustre_0[OST:26]
lustre_0-OST001b_UUID 309218724 547517 308671207 0% /mnt/lustre_0[OST:27]
lustre_0-OST001c_UUID 330304954 518607 329786347 0% /mnt/lustre_0[OST:28]
lustre_0-OST001d_UUID 285967857 506350 285461507 0% /mnt/lustre_0[OST:29]
lustre_0-OST001e_UUID 314350650 520228 313830422 0% /mnt/lustre_0[OST:30]
lustre_0-OST001f_UUID 271317367 517673 270799694 0% /mnt/lustre_0[OST:31]
lustre_0-OST0020_UUID 306106565 548722 305557843 0% /mnt/lustre_0[OST:32]
lustre_0-OST0021_UUID 277913188 468354 277444834 0% /mnt/lustre_0[OST:33]
lustre_0-OST0022_UUID 285070365 501041 284569324 0% /mnt/lustre_0[OST:34]
lustre_0-OST0023_UUID 318210059 559107 317650952 0% /mnt/lustre_0[OST:35]
lustre_0-OST0024_UUID 280428253 513664 279914589 0% /mnt/lustre_0[OST:36]
lustre_0-OST0025_UUID 298958894 533589 298425305 0% /mnt/lustre_0[OST:37]
lustre_0-OST0026_UUID 304151287 538173 303613114 0% /mnt/lustre_0[OST:38]
lustre_0-OST0027_UUID 306168127 523397 305644730 0% /mnt/lustre_0[OST:39]
lustre_0-OST0028_UUID 286925194 472537 286452657 0% /mnt/lustre_0[OST:40]
lustre_0-OST0029_UUID 273508145 526807 272981338 0% /mnt/lustre_0[OST:41]
lustre_0-OST002a_UUID 288090585 482968 287607617 0% /mnt/lustre_0[OST:42]
lustre_0-OST002b_UUID 345436810 533658 344903152 0% /mnt/lustre_0[OST:43]
lustre_0-OST002c_UUID 334033778 535864 333497914 0% /mnt/lustre_0[OST:44]
lustre_0-OST002d_UUID 299895732 547031 299348701 0% /mnt/lustre_0[OST:45]
lustre_0-OST002e_UUID 355630432 523070 355107362 0% /mnt/lustre_0[OST:46]
lustre_0-OST002f_UUID 279499831 563847 278935984 0% /mnt/lustre_0[OST:47]
filesystem summary: 344890671 29283273 315607398 8% /mnt/lustre_0

UUID Inodes IUsed IFree IUse% Mounted on
lustre_1-MDT0000_UUID 319898525 197113 319701412 0% /mnt/lustre_1[MDT:0]
lustre_1-OST0000_UUID 503545856 48975 503496881 0% /mnt/lustre_1[OST:0]
lustre_1-OST0001_UUID 503545856 49081 503496775 0% /mnt/lustre_1[OST:1]
lustre_1-OST0002_UUID 503545856 49098 503496758 0% /mnt/lustre_1[OST:2]
lustre_1-OST0003_UUID 503545856 49133 503496723 0% /mnt/lustre_1[OST:3]
filesystem summary: 319898525 197113 319701412 0% /mnt/lustre_1

> [...]
Hope that's useful for you.

Chris

PS I'm a different Chris Walker from the one who posted from Harvard...
--
Dr Christopher J. Walker
School of Physics
Queen Mary, University of London
Nathan Rutman
2011-Apr-22 22:41 UTC
[Lustre-discuss] [Lustre-devel] Research on filesystem metadata operation distribution
Hmm - also 'uptime' would really be nice to have with these so we could estimate rough rates...

On Apr 21, 2011, at 11:40 AM, Andreas Dilger wrote:
> [...]
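That estimate is easy to script. A minimal sketch, assuming the counters have been accumulating for roughly as long as the node has been up (the stats can be cleared independently of a reboot, so treat the result as a rough average at best):

#!/bin/sh
# Rough per-second MDS operation rates: divide each cumulative counter
# by the node's uptime. If the counters were cleared more recently than
# boot, these rates will be underestimates.
up=$(cut -d. -f1 /proc/uptime)    # seconds since boot
lctl get_param -n mds.*.stats |
    egrep "open|close|rename|link|attr|sync" |
    awk -v up="$up" '{ printf "%-10s %14d reqs %12.2f reqs/s\n", $1, $2, $2 / up }'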
Nirmal Seenu
2011-Apr-25 15:45 UTC
[Lustre-devel] Research on filesystem metadata operation distribution
Attached are the stats from the cosmology cluster Lustre file system. We use this filesystem as a general-purpose storage system for outputs, binaries and code. The storage space is not backed up; users have to write files to tape manually if needed.

Nirmal

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: cc-lustre-stats
Url: http://lists.lustre.org/pipermail/lustre-devel/attachments/20110425/0d396934/attachment-0003.pl
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: cc-lustre-fsstats
Url: http://lists.lustre.org/pipermail/lustre-devel/attachments/20110425/0d396934/attachment-0004.pl
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: cc-lustre-fsstat-output
Url: http://lists.lustre.org/pipermail/lustre-devel/attachments/20110425/0d396934/attachment-0005.pl
Nirmal Seenu
2011-Apr-25 15:48 UTC
[Lustre-devel] Research on filesystem metadata operation distribution
Attached are the stats from the LQCD cluster Lustre file system. We use this filesystem as a general-purpose storage system to store outputs. The storage space is not backed up; users have to write files to tape manually if needed.

Nirmal

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: lqcd-lustre-stats
Url: http://lists.lustre.org/pipermail/lustre-devel/attachments/20110425/c7d0782c/attachment-0003.pl
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: lqcd-lustre-fsstats
Url: http://lists.lustre.org/pipermail/lustre-devel/attachments/20110425/c7d0782c/attachment-0004.pl
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: lqcd-lustre-fsstat-output
Url: http://lists.lustre.org/pipermail/lustre-devel/attachments/20110425/c7d0782c/attachment-0005.pl
Thomas Roth
2011-May-05 17:54 UTC
[Lustre-devel] [Lustre-discuss] Research on filesystem metadata operation distribution
At GSI, we have

lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"
open 10302752480 samples [reqs]
close 528292519 samples [reqs]
unlink 22292174 samples [reqs]
rename 542512 samples [reqs]
getxattr 408511496 samples [reqs]
setxattr 838368 samples [reqs]
setattr 27097846 samples [reqs]
getattr 738233548 samples [reqs]

Output of 'lfs df' is attached.

The filesystem is used for storing HEP data from theory calculations and simulations, and HEP experimental data for use in analysis. People also use it as a software repository, to compile their programs (ouch), and as a general-purpose distributed file system (a certain sysadmin is known to store his music files there).

Regards,
Thomas

On 04/21/2011 08:40 PM, Andreas Dilger wrote:
> [...]

--
Thomas Roth
Department: Informationstechnologie
GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1
64291 Darmstadt
www.gsi.de

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: lfs_df
Url: http://lists.lustre.org/pipermail/lustre-devel/attachments/20110505/c14cc51a/attachment-0002.pl
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: lfs_df_i
Url: http://lists.lustre.org/pipermail/lustre-devel/attachments/20110505/c14cc51a/attachment-0003.pl
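Since the point of these counters is the relative mix of operations, any of the stats dumps in this thread can be normalized into percentages with a small pipeline along these lines (a sketch, using the same grep filter as above; duplicate lines from multiple MDTs are summed):

lctl get_param -n mds.*.stats |
    egrep "open|close|rename|link|attr|sync" |
    awk '{ n[$1] += $2; total += $2 }
         END { for (op in n) printf "%-10s %6.2f%%\n", op, 100 * n[op] / total }'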
Andreas Dilger
2011-May-05 18:05 UTC
[Lustre-devel] Research on filesystem metadata operation distribution
On May 5, 2011, at 11:54, Thomas Roth wrote:
> At GSI, we have
>
> lctl get_param mds.*.stats | egrep "open|close|rename|link|attr|sync"
> open 10302752480 samples [reqs]
> close 528292519 samples [reqs]
> unlink 22292174 samples [reqs]
> rename 542512 samples [reqs]
> getxattr 408511496 samples [reqs]
> setxattr 838368 samples [reqs]
> setattr 27097846 samples [reqs]
> getattr 738233548 samples [reqs]
>
> Output of 'lfs df' is attached.

While it wasn't my original goal in asking for this data, a question I've been asking all of the sites whose filesystems have OSTs of different sizes is whether they use (and/or continue to use) an external process for balancing the space used on the OSTs (e.g. migrating files, or marking OSTs inactive on the MDS once they reach some threshold of space usage), or whether the existing space-balancing mechanism in the MDS was able to achieve this relatively uniform space utilization on its own.

> [...]

Cheers, Andreas
--
Andreas Dilger
Principal Engineer
Whamcloud, Inc.
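For context, the "external process" approach is typically scripted along the lines below. This is a hypothetical sketch, not a tested tool: the 90% threshold and the /mnt/lustre_0 mount point (borrowed from earlier in the thread) are placeholders, it assumes a client mount is reachable for 'lfs df', and it assumes the MDS-side OSC devices in 'lctl dl' are named after their OST, which can vary by version. Deactivating an OSC on the MDS stops new objects being allocated on that OST; existing files remain accessible.

#!/bin/sh
# Sketch of manual OST space balancing, run on the MDS: deactivate the
# MDS-side OSC for any OST above THRESHOLD percent full so that no new
# objects are allocated there. THRESHOLD and the mount point are
# placeholder assumptions; adjust for your site.
THRESHOLD=90
lfs df /mnt/lustre_0 | awk '/\[OST:/ { gsub("%", "", $5); print $1, $5 }' |
while read uuid pct; do
    if [ "$pct" -gt "$THRESHOLD" ]; then
        ost=${uuid%_UUID}                      # e.g. lustre_0-OST0000
        # Find the OSC device number for this OST on the MDS, then deactivate it.
        dev=$(lctl dl | awk -v o="$ost" '$3 == "osc" && $4 ~ o { print $1 }')
        [ -n "$dev" ] && lctl --device "$dev" deactivate
    fi
done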