Displaying 6 results from an estimated 6 matches for "ost0003".
2014 Nov 13 (0 replies): OST acting up
...ss, from right address now:
Hello,
I am using Lustre 2.4.2 and have an OST that doesn't seem to be written to.
When I check the MDS with 'lctl dl' I do not see that OST in the list.
However, when I check the OSS that the OST belongs to, I can see it is
mounted and up:
0 UP osd-zfs l2-OST0003-osd l2-OST0003-osd_UUID 5
3 UP obdfilter l2-OST0003 l2-OST0003_UUID 5
4 UP lwp l2-MDT0000-lwp-OST0003 l2-MDT0000-lwp-OST0003_UUID 5
Since it isn't written to (the MDS doesn't seem to know about it), I
created a directory. The index of that OST is 3, so I did a "lfs
setstripe -...
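
For reference, a minimal sketch of that check (the /mnt/l2 mount point and
testdir path are assumptions, not from the original post):

# on the MDS: the OST should appear in the device list if the MDS knows about it
lctl dl | grep OST0003
# on a client: pin a test directory to OST index 3 and see whether writes land there
lfs setstripe -i 3 -c 1 /mnt/l2/testdir
dd if=/dev/zero of=/mnt/l2/testdir/probe bs=1M count=1
lfs getstripe /mnt/l2/testdir/probe   # obdidx should show 3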
2013 Apr 29 (1 reply): OSTs inactive on one client (only)
Hi everyone,
I have seen this question here before, but without a very
satisfactory answer. One of our half a dozen clients has
lost access to a set of OSTs:
> lfs osts
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID ACTIVE
6: lustre-OST0006_UUID ACTIVE
All OSTs show as completely fine on the other clients, and
the system is working there. In addition, I have run numerous
checks of the IB network (ibhosts, ibping, etc.), and I do not
see any...
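
A common first step on the affected client (device names below are
hypothetical, patterned on the listing above) is to inspect the import
state of the inactive OSCs and try re-activating them:

# show the client's connection state for one of the inactive OSTs
lctl get_param osc.lustre-OST0002-osc-*.import
# re-activate the OSC in case it was administratively deactivated
lctl set_param osc.lustre-OST0002-osc-*.active=1
# verify the client can reach all servers
lfs check servers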
2013 Oct 17 (3 replies): Speeding up configuration log regeneration?
Hi,
We run a four-node Lustre 2.3 system, and I needed to both change the
hardware under the MGS/MDS and reassign an OSS IP. At the same time, I
added a brand-new 10GbE network to the system, which was the reason for
the MDS hardware change.
I ran tunefs.lustre --writeconf as per chapter 14.4 of the Lustre Manual,
and everything mounts fine. Log regeneration apparently works, since
it seems to do something, but
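
For context, the writeconf procedure from that chapter is roughly the
following (device paths and mount points are placeholders; the ordering
is what matters):

# unmount all clients and all targets first
umount /mnt/mdt                         # on the MDS
umount /mnt/ost                         # on each OSS, for every OST
# regenerate the configuration logs
tunefs.lustre --writeconf /dev/mdtdev   # on the MDT
tunefs.lustre --writeconf /dev/ostdev   # on each OST
# remount in order: MGS/MDT first, then the OSTs, then the clients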
2013 Feb 12 (2 replies): Lost folders after changing MDS
UUID                 bytes    Used  Available  Use%  Mounted on
...MDT0000_UUID      37.5G  499.5M      34.5G    1%  /data[MDT:0]
AC3-OST0000_UUID     16.4T    2.2T      13.3T   14%  /data[OST:0]
AC3-OST0001_UUID     16.4T    1.8T      13.7T   12%  /data[OST:1]
AC3-OST0002_UUID      6.4T    6.0T      49.2G   99%  /data[OST:2]
AC3-OST0003_UUID      6.4T    6.1T     912.9M  100%  /data[OST:3]
AC3-OST0004_UUID      4.3T    4.1T      17.2G  100%  /data[OST:4]
AC3-OST0005_UUID      1.9T    1.8T      29.0M  100%  /data[OST:5]
AC3-OST0006_UUID      1.9T    1.5T     282.1G   85%  /data[OST:6]
AC3-OST0007...
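
The excerpt matches lfs df -h output; a quick way to reproduce it and flag
the full targets (the /data mount point is taken from the listing):

# per-target usage for the filesystem mounted at /data
lfs df -h /data
# show only targets at 99% use or more
lfs df -h /data | awk '$5 ~ /^(99|100)%$/'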
2008 Feb 22 (0 replies): lustre error
...am evicting it.
Feb 22 11:21:53 node4 kernel: Lustre: hallmark-OST0004: haven't heard
from client 9762fef8-bb47-ca87-d2cd-7c439607c523 (at 192.168.0.158@tcp)
in 212 seconds.
I think it's dead, and I am evicting it.
Other side:
Feb 22 11:16:21 node3 kernel: Lustre: hallmark-OST0003: haven't heard
from client 11e65f33-019b-c3cc-17d9-2ccf559a86cd (at 192.168.0.173@tcp)
in 227 seconds.
I think it's dead, and I am evicting it.
Feb 22 11:20:13 node3 kernel: LustreError:
16617:0:(ldlm_lib.c:576:target_handle_connect()) @@@ UUID
'hallmark-OST0004_U...
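
When an OST claims a client is dead, a quick cross-check (the NID is taken
from the log above) is whether the node still answers at the LNet level:

# from the OSS: ping the "evicted" client's NID
lctl ping 192.168.0.173@tcp
# on the client: list the NIDs it actually advertises
lctl list_nids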
2008 Jan 10 (4 replies): 1.6.4.1 - active client evicted
...'s:
----------------------------8<------------------------
Jan 10 12:20:46 Lustre: hpfs-OST0002: haven't heard from client c542d305-5995-f79d-1c8d-c9578393358a (at 130.239.78.238@tcp) in 246 seconds. I think it's dead, and I am evicting it.
Jan 10 12:20:56 Lustre: hpfs-OST0003: haven't heard from client c542d305-5995-f79d-1c8d-c9578393358a (at 130.239.78.238@tcp) in 256 seconds. I think it's dead, and I am evicting it.
Jan 10 12:42:52 LustreError: 6665:0:(ldlm_lib.c:1442:target_send_reply_msg()) @@@ processing error (-107) req@ffff810081af2400 x6...
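
The -107 in the last line is a plain Linux errno; decoding it (generic
errno lookup, not a Lustre-specific tool) shows the connection was
already gone:

# -107 == -ENOTCONN
python -c 'import errno, os; print(errno.ENOTCONN, os.strerror(errno.ENOTCONN))'
# -> 107 Transport endpoint is not connected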