Displaying 3 results from an estimated 3 matches for "ldlm_cli_enqueue".
2008 Jan 10 (4 replies): 1.6.4.1 - active client evicted
...at tcp was lost; in progress operations using this service will wait for recovery to complete.
Jan 10 12:40:38 LustreError: 167-0: This client was evicted by hpfs-MDT0000; in progress operations using this service will fail.
Jan 10 12:40:38 LustreError: 7975:0:(mdc_locks.c:424:mdc_finish_enqueue()) ldlm_cli_enqueue: -5
Jan 10 12:40:38 LustreError: 7975:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID req at ffff8100c5298800 x649493/t0 o101->hpfs-MDT0000_UUID at 130.239.78.233@tcp:12 lens 432/912 ref 1 fl Rpc:/0/0 rc 0/0
Jan 10 12:40:38 LustreError: 7975:0:(mdc_locks.c:424:mdc_finish_enqueue()) l...
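The -5 in these enqueue messages is a negative kernel errno, i.e. -EIO: once the import is flagged IMP_INVALID after an eviction, in-flight RPCs complete with that error. A minimal decoder sketch (plain C, nothing Lustre-specific; the default code is just the one from the log above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Lustre logs kernel return codes as negative errno values,
       e.g. "ldlm_cli_enqueue: -5" is -EIO. */
    int main(int argc, char **argv)
    {
        int code = (argc > 1) ? atoi(argv[1]) : -5;
        if (code < 0)
            code = -code;
        printf("errno %d: %s\n", code, strerror(code));
        return 0;
    }

Run with -5 it prints "errno 5: Input/output error"; the -30 in the next result below decodes to "Read-only file system" (EROFS).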
2013 Oct 10 (0 replies): read-only on certain client versions
...servers with 1.8.9 clients, and it seems every 1.8.9 client just flips its mounts to read-only (with no actual message until a write is attempted), yet when the OSSs (at 2.1.6.0) mount, they can write all day long.
On write, the 1.8.9 clients log: LustreError: 25284:0:(mdc_locks.c:672:mdc_enqueue()) ldlm_cli_enqueue error: -30
Any ideas?
----------------
John White
HPC Systems Engineer
(510) 486-7307
One Cyclotron Rd, MS: 50C-3209C
Lawrence Berkeley National Lab
Berkeley, CA 94720
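The -30 above is -EROFS ("Read-only file system"), consistent with the mounts having flipped to read-only. Since there is no message until a write is attempted, one cheap check of a client's view of the mount, without writing, is statvfs(); a sketch, with the mount point as a placeholder:

    #include <stdio.h>
    #include <sys/statvfs.h>

    /* Report whether the kernel marks a mount read-only.
       The default path is a placeholder; pass the real Lustre mount point. */
    int main(int argc, char **argv)
    {
        const char *mnt = (argc > 1) ? argv[1] : "/mnt/lustre";
        struct statvfs vfs;

        if (statvfs(mnt, &vfs) != 0) {
            perror("statvfs");
            return 1;
        }
        printf("%s is %s\n", mnt,
               (vfs.f_flag & ST_RDONLY) ? "read-only" : "read-write");
        return 0;
    }

Whether this catches the 1.8.9 behaviour depends on whether the client actually sets the read-only mount flag or only returns EROFS at write time; if it is the latter, a small write probe is still needed.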
2008 Feb 04 (32 replies): Lustre clients getting evicted
on our cluster that has been running Lustre for about one month. I have
1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster is all GigE and has about 608 nodes (1854 cores).
We have a lot of jobs that die and/or go into high I/O wait; strace
shows processes stuck in fstat().
The big problem, I think (I would like some feedback on this), is that of
these 608 nodes, 209 of them have in dmesg
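For the processes stuck in fstat(): on a hung Lustre mount the task usually sits in uninterruptible D state, where signals are not delivered, so a probe has to do the stat() in a child and time it out from the parent. A monitoring sketch along those lines (path and timeout are placeholders):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Do the stat() in a child: if the mount is hung, the child blocks in
       D state, but the parent can still time out and report. */
    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "/mnt/lustre"; /* placeholder */
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            struct stat st;
            _exit(stat(path, &st) == 0 ? 0 : 1);
        }
        for (int i = 0; i < 10; i++) {          /* ~10 s timeout */
            int status;
            if (waitpid(pid, &status, WNOHANG) == pid) {
                printf("%s: %s\n", path,
                       WEXITSTATUS(status) == 0 ? "ok" : "stat failed");
                return WEXITSTATUS(status);
            }
            sleep(1);
        }
        fprintf(stderr, "%s: stat() still hung after ~10s\n", path);
        return 2;
    }

Run from a periodic health check, a status of 2 flags unresponsive clients without having to log in to each node and grep dmesg.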