search for: ost0005_uuid

Displaying 7 results from an estimated 7 matches for "ost0005_uuid".

2013 Mar 18
1
lustre showing inactive devices
...276.1M 5.3G 5% /mnt/lustre[OST:1]
lustre-OST0002_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:2]
lustre-OST0003_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:3]
lustre-OST0004_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:4]
lustre-OST0005_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:5]
lustre-OST0006_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:6]
lustre-OST0007_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:7]
lustre-OST0008_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre...
2013 Mar 18
1
OST0006 : inactive device
...G 276.1M 5.3G 5% /mnt/lustre[OST:1]
lustre-OST0002_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:2]
lustre-OST0003_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:3]
lustre-OST0004_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:4]
lustre-OST0005_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:5]
lustre-OST0006_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:6]
lustre-OST0007_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:7]
lustre-OST0008_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OS...
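Both threads above show OSTs reported as inactive in `lfs df` output. A minimal sketch of how a client's view of OST state can be checked, assuming a client mount at /mnt/lustre and the standard Lustre tools; the mount point is a placeholder from the snippets above:

```shell
# List per-OST usage; an inactive OST is flagged in the output
# (exact wording varies by Lustre version).
lfs df -h /mnt/lustre

# Show the local Lustre device list and each device's state
# as this node sees it.
lctl dl

# Check the client-side per-OSC "active" flag (1 = active, 0 = inactive).
lctl get_param osc.*.active
```

Comparing this output across clients quickly shows whether the inactive state is cluster-wide or local to one node.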
2010 Jul 08
5
No space left on device on not full filesystem
...001_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:1]
lustre-OST0002_UUID 916.9G 44.9G 825.5G 4% /mnt/lustre[OST:2]
lustre-OST0003_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:3]
lustre-OST0004_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:4]
lustre-OST0005_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:5]
lustre-OST0006_UUID 916.9G 44.5G 825.9G 4% /mnt/lustre[OST:6]
lustre-OST0007_UUID 916.9G 44.6G 825.8G 4% /mnt/lustre[OST:7]
lustre-OST0008_UUID 916.9G 44.5G 825.8G 4% /mnt/lustre[OST:8]
filesystem sum...
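ENOSPC on a Lustre filesystem whose summary looks far from full is often a single full OST: a write to a file striped onto that OST fails even though other OSTs have space. A hedged diagnostic sketch, assuming a client mount at /mnt/lustre; the file path and device number are placeholders:

```shell
# Check per-OST usage: one OST at or near 100% can return ENOSPC
# even when the filesystem summary shows plenty of free space.
lfs df -h /mnt/lustre

# See which OSTs the failing file's objects live on.
lfs getstripe /mnt/lustre/path/to/file

# On the MDS, a full OST can be deactivated so no new objects are
# allocated on it (existing data stays readable); look up its device
# number first.
lctl dl
lctl --device 7 deactivate    # "7" is a placeholder device number
```

Once space is freed or rebalanced, the OST can be brought back with `lctl --device <N> activate`.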
2013 Apr 29
1
OSTs inactive on one client (only)
...ore, but without a very satisfactory answer. One of our half a dozen clients has lost access to a set of OSTs:

> lfs osts
OBDS::
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID ACTIVE
6: lustre-OST0006_UUID ACTIVE

All OSTs show as completely fine on the other clients, and the system is working there. In addition, I have run numerous checks of the IB network (ibhosts, ibping, etc.), and I do not see any networking issues. Moreover, the OSSs include: OSS #1 -->...
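When OSTs show INACTIVE on a single client while every other client is healthy, the client-side imports for those OSTs are the usual suspects. A sketch of client-local checks, assuming the filesystem name lustre from the listing above; the OST index is taken from the INACTIVE entries and is otherwise a placeholder:

```shell
# On the affected client: inspect the OSC import for one of the
# inactive OSTs; a healthy import reports state FULL, a broken one
# reports DISCONN or similar, along with the failover NIDs tried.
lctl get_param osc.lustre-OST0002-osc-*.import

# If the OSC was administratively deactivated on this client,
# re-enable it; this only changes the local client's state.
lctl set_param osc.lustre-OST0002-osc-*.active=1
```

If the import stays disconnected despite a clean network, remounting the client is often the quickest way to force a fresh connection to the OSSs.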
2012 Sep 27
4
Bad reporting inodes free
...73280 17822532 1250748 93% /mnt/data[OST:1]
cetafs-OST0002_UUID 19073280 17822560 1250720 93% /mnt/data[OST:2]
cetafs-OST0003_UUID 19073280 17822622 1250658 93% /mnt/data[OST:3]
cetafs-OST0004_UUID 19073280 17822181 1251099 93% /mnt/data[OST:4]
cetafs-OST0005_UUID 19073280 17822769 1250511 93% /mnt/data[OST:5]
cetafs-OST0006_UUID 19073280 17822378 1250902 93% /mnt/data[OST:6]
cetafs-OST0007_UUID 19073280 17822131 1251149 93% /mnt/data[OST:7]
cetafs-OST0008_UUID 19073280 17822419 1250861 93% /mnt/data[OST:8]...
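The columns in the snippet above are per-OST inode counts (total, used, free, %used, mount point), i.e. the output of an inode query rather than a space query. A minimal sketch, assuming the /mnt/data mount point from the listing:

```shell
# Per-target inode counts; the filesystem summary line is derived
# from these, so a target that misreports shows up here directly.
lfs df -i /mnt/data
```

Note that the number of creatable files is normally bounded by the MDT's free inodes, so the MDT line of the same output is worth checking alongside the OST lines.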
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi, we run a four-node Lustre 2.3 system, and I needed both to change the hardware under the MGS/MDS and to reassign an OSS IP. At the same time, I added a brand-new 10GE network to the system, which was the reason for the MDS hardware change. I ran tunefs.lustre --writeconf as per chapter 14.4 of the Lustre Manual, and everything mounts fine. Log regeneration apparently works, since it seems to do something, but
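The writeconf procedure the poster refers to regenerates the configuration logs on the MGS after server NIDs change. A hedged outline of the documented sequence, with placeholder device paths and mount points (adjust for the actual cluster):

```shell
# 1. Unmount everything: all clients, then all OSTs, then the MDT.
umount /mnt/lustre          # on every client
umount /mnt/ost0            # on every OSS, once per OST
umount /mnt/mdt             # on the MDS

# 2. Run writeconf on every target so stale config logs are erased
#    and regenerated at next mount; device paths are placeholders.
tunefs.lustre --writeconf /dev/mdtdev    # on the MDS
tunefs.lustre --writeconf /dev/ostdev    # on each OSS, once per OST

# 3. Remount in order: MDT first, then OSTs, then clients, so each
#    target re-registers with the MGS and new logs are written.
mount -t lustre /dev/mdtdev /mnt/mdt
mount -t lustre /dev/ostdev /mnt/ost0
mount -t lustre mgsnode@tcp:/lustre /mnt/lustre
```

The mount order matters: OSTs mounted before the MDT would register against logs that have not been regenerated yet.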
2013 Feb 12
2
Lost folders after changing MDS
...OST0001_UUID 16.4T 1.8T 13.7T 12% /data[OST:1]
AC3-OST0002_UUID 6.4T 6.0T 49.2G 99% /data[OST:2]
AC3-OST0003_UUID 6.4T 6.1T 912.9M 100% /data[OST:3]
AC3-OST0004_UUID 4.3T 4.1T 17.2G 100% /data[OST:4]
AC3-OST0005_UUID 1.9T 1.8T 29.0M 100% /data[OST:5]
AC3-OST0006_UUID 1.9T 1.5T 282.1G 85% /data[OST:6]
AC3-OST0007_UUID 1.9T 1.8T 434.3M 100% /data[OST:7]
AC3-OST0008_UUID 1.9T 1.8T 12.9G 99% /data[OST:8]
OST0009...