Displaying 10 results from an estimated 10 matches for "ost0003_uuid".
2013 Apr 29 (1 reply): OSTs inactive on one client (only)
Hi everyone,
I have seen this question here before, but without a very
satisfactory answer. One of our half a dozen clients has
lost access to a set of OSTs:
> lfs osts
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID ACTIVE
6: lustre-OST0006_UUID ACTIVE
All OSTs show as completely fine on the other clients, and
the system is working there. In addition, I have run numerous
checks of the IB network (ibhosts, ibping, etc.), and I do not
see any netwo...
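Not from the thread, but a minimal diagnostic sketch for this situation, assuming the OSCs were deactivated only on the affected client; the filesystem name "lustre" and OST index are taken from the listing above, and the device number is a placeholder:

# On the affected client: list local devices and their state
lctl dl
# Check whether this client has the OSC marked inactive (0 = deactivated)
lctl get_param osc.lustre-OST0002-osc-*.active
# Re-enable it on this client only
lctl set_param osc.lustre-OST0002-osc-*.active=1
# Or, using the device number reported by 'lctl dl'
lctl --device <devno> activate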
2008 Mar 06 (2 replies): strange lustre errors
...foreshadow disaster
LustreError: 5080:0:(import.c:607:ptlrpc_connect_interpret())
data4-OST0000_UUID at 192.168.2.98@tcp changed handle from
0xfe51139158c6502c to 0xfe511392a35878c1; copying, but this may
foreshadow disaster
LustreError: 5080:0:(import.c:607:ptlrpc_connect_interpret())
scratch2-OST0003_UUID at 192.168.2.99@tcp changed handle from
0x9ee58a75fddf2834 to 0x9ee58a761d190470; copying, but this may
foreshadow disaster
LustreError: 5080:0:(import.c:607:ptlrpc_connect_interpret())
scratch1-OST0003_UUID at 192.168.2.99@tcp changed handle from
0x9ee58a75fddf2754 to 0x9ee58a761d190462; copyi...
2013 Mar 18 (1 reply): lustre showing inactive devices
...274.3M 3.9G 6% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:0]
lustre-OST0001_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:1]
lustre-OST0002_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:2]
lustre-OST0003_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:3]
lustre-OST0004_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:4]
lustre-OST0005_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:5]
lustre-OST0006_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre...
2013 Mar 18 (1 reply): OST0006 : inactive device
...G 274.3M 3.9G 6% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:0]
lustre-OST0001_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:1]
lustre-OST0002_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:2]
lustre-OST0003_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:3]
lustre-OST0004_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:4]
lustre-OST0005_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OST:5]
lustre-OST0006_UUID 5.9G 276.1M 5.3G 5% /mnt/lustre[OS...
2014 Nov 13 (0 replies): OST acting up
...have an OST that doesn't seem to be written to.
When I check the MDS with 'lctl dl', I do not see that OST in the list.
However, when I check the OSS that the OST belongs to, I can see it is
mounted and up:
0 UP osd-zfs l2-OST0003-osd l2-OST0003-osd_UUID 5
3 UP obdfilter l2-OST0003 l2-OST0003_UUID 5
4 UP lwp l2-MDT0000-lwp-OST0003 l2-MDT0000-lwp-OST0003_UUID 5
Since it isn't being written to (the MDS doesn't seem to know about it), I
created a directory. The index of that OST is 3, so I ran "lfs
setstripe -i 3 -c 1 /mnt/l2-lustre/test-37" to force stuff that is
written i...
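As a hedged follow-up sketch (not from the post): after pinning the directory to OST index 3, one way to confirm that OST is actually being used is to write a test file and inspect its layout; the file name "probe" and the sizes are made up:

lfs setstripe -i 3 -c 1 /mnt/l2-lustre/test-37
dd if=/dev/zero of=/mnt/l2-lustre/test-37/probe bs=1M count=16
lfs getstripe /mnt/l2-lustre/test-37/probe   # should report obdidx 3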
2010 Jul 08 (5 replies): No space left on device on not full filesystem
...000_UUID 43.8G 813.4M 40.5G 1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 867.7G 45.8G 777.8G 5% /mnt/lustre[OST:0]
lustre-OST0001_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:1]
lustre-OST0002_UUID 916.9G 44.9G 825.5G 4% /mnt/lustre[OST:2]
lustre-OST0003_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:3]
lustre-OST0004_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:4]
lustre-OST0005_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:5]
lustre-OST0006_UUID 916.9G 44.5G 825.9G 4% /mnt/lustre[OST:6]
lustre-OST0007_...
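A common pattern behind ENOSPC on a filesystem that looks mostly empty is a single full OST (or exhausted inodes on one OST) rather than aggregate space; a quick-check sketch, assuming the /mnt/lustre mount point from the output above and a hypothetical failing file path:

lfs df -h /mnt/lustre           # per-OST block usage; one full OST can return ENOSPC
lfs df -i /mnt/lustre           # per-OST inode usage
lfs getstripe <failing-file>    # shows which OSTs the file's objects live on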
2012 Sep 27 (4 replies): Bad reporting inodes free
...70592 20949223 954521369 2% /mnt/data[MDT:0]
cetafs-OST0000_UUID 19073280 17822213 1251067 93% /mnt/data[OST:0]
cetafs-OST0001_UUID 19073280 17822532 1250748 93% /mnt/data[OST:1]
cetafs-OST0002_UUID 19073280 17822560 1250720 93% /mnt/data[OST:2]
cetafs-OST0003_UUID 19073280 17822622 1250658 93% /mnt/data[OST:3]
cetafs-OST0004_UUID 19073280 17822181 1251099 93% /mnt/data[OST:4]
cetafs-OST0005_UUID 19073280 17822769 1250511 93% /mnt/data[OST:5]
cetafs-OST0006_UUID 19073280 17822378 1250902 93% /mnt/data[OST:6]...
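If the free-inode numbers look wrong, one hedged cross-check (not from the thread) is to compare Lustre's view with what the mounted OST itself reports on the server; the OST mount point below is a placeholder:

lfs df -i /mnt/data          # on a client: Lustre's per-target inode report
df -i /mnt/lustre/ost0003    # on the OSS: the mounted OST's own inode usage (path is hypothetical)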
2013 Oct 17 (3 replies): Speeding up configuration log regeneration?
Hi,
We run a four-node Lustre 2.3 setup, and I needed both to change the hardware
under the MGS/MDS and to reassign an OSS IP. At the same time, I added a brand
new 10GE network to the system, which was the reason for the MDS hardware
change.
I ran tunefs.lustre --writeconf as per chapter 14.4 of the Lustre Manual,
and everything mounts fine. Log regeneration apparently works, since
it seems to do something, but
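For context, a condensed sketch of the writeconf procedure referenced above (chapter 14.4), with device paths as placeholders; this is the manual's general sequence, not the poster's exact commands:

# 1. Unmount clients, then OSTs, then the MDT
# 2. Regenerate the configuration logs on every target
tunefs.lustre --writeconf /dev/mdt_device
tunefs.lustre --writeconf /dev/ost_device
# 3. Remount in order: MGS/MDT first, then OSTs, then clients
mount -t lustre /dev/mdt_device /mnt/mdt
mount -t lustre /dev/ost_device /mnt/ost0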
2008 Jan 02 (9 replies): lustre quota problems
Hello,
I've several problems with quota on our test cluster:
When I set the quota for a person to a given value (e.g. the values which
are provided in the operations manual), I'm able to write exactly the amount
which is set with setquota.
But when I delete the file(s), I'm not able to use this space again.
Here is what I've done in detail:
lfs checkquota
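For reference, a minimal sketch of the set/verify cycle being described, assuming a Lustre release of that era where quotacheck was still required; the user name and limits are made up, and exact setquota syntax varies by version:

lfs quotacheck -ug /mnt/lustre
lfs setquota -u someuser -b 0 -B 1048576 -i 0 -I 10000 /mnt/lustre   # block limits are in kilobytes
lfs quota -u someuser /mnt/lustre   # re-run after deleting files to see whether usage drops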
2013 Feb 12 (2 replies): Lost folders after changing MDS
...MDT0000_UUID 37.5G 499.5M 34.5G 1% /data[MDT:0]
AC3-OST0000_UUID 16.4T 2.2T 13.3T 14% /data[OST:0]
AC3-OST0001_UUID 16.4T 1.8T 13.7T 12% /data[OST:1]
AC3-OST0002_UUID 6.4T 6.0T 49.2G 99% /data[OST:2]
AC3-OST0003_UUID 6.4T 6.1T 912.9M 100% /data[OST:3]
AC3-OST0004_UUID 4.3T 4.1T 17.2G 100% /data[OST:4]
AC3-OST0005_UUID 1.9T 1.8T 29.0M 100% /data[OST:5]
AC3-OST0006_UUID 1.9T 1.5T 282.1G 85% /data[OST:6]
AC3-OST0007_UUID...