Displaying 3 results from an estimated 3 matches for "mdc_enqueu".
2013 Oct 10 (0 replies): read-only on certain client versions
...We have 2.1.6.0 servers with 1.8.9 clients, and it seems every 1.8.9 client flips its mounts to read-only (with no actual message until a write is attempted), yet when the OSSs (at 2.1.6.0) mount, they can write all day long.
On write, the 1.8.9 clients log: LustreError: 25284:0:(mdc_locks.c:672:mdc_enqueue()) ldlm_cli_enqueue error: -30
Any ideas?
----------------
John White
HPC Systems Engineer
(510) 486-7307
One Cyclotron Rd, MS: 50C-3209C
Lawrence Berkeley National Lab
Berkeley, CA 94720
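
For context on that error code: Lustre log messages report kernel return codes as negative errno values, and -30 corresponds to EROFS ("Read-only file system"), which matches the mounts flipping to read-only. A minimal sketch for decoding such codes (Python; the helper name is illustrative, not from the thread):

import errno
import os

def decode_lustre_rc(rc):
    """Translate a negative kernel return code from a LustreError line
    (e.g. 'ldlm_cli_enqueue error: -30') into its errno name."""
    code = abs(rc)
    name = errno.errorcode.get(code, "UNKNOWN")
    return "%d = -%s (%s)" % (rc, name, os.strerror(code))

print(decode_lustre_rc(-30))   # -30 = -EROFS (Read-only file system)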
2008 Feb 04 (32 replies): Lustre clients getting evicted
on our cluster, which has been running Lustre for about 1 month. I have
1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster is all GigE and has about 608 nodes / 1854 cores.
We have a lot of jobs that die and/or go into high I/O wait; strace
shows processes stuck in fstat().
The big problem (I think; I would like some feedback on it) is that,
of these 608 nodes, 209 of them have in dmesg
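
To see what such stuck processes are actually blocked on, one approach (a minimal sketch, assuming a standard Linux /proc layout; not from the original thread) is to list tasks in D state, i.e. uninterruptible sleep, together with their kernel wait channel, which on a hung Lustre client often points into the ldlm/ptlrpc code paths seen in traces like the one below:

import os

# List D-state (uninterruptible sleep) processes and their kernel
# wait channel; tasks blocked on Lustre I/O typically show up here.
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open("/proc/%s/stat" % pid) as f:
            raw = f.read()
        # comm is parenthesized and may itself contain spaces
        lpar, rpar = raw.index("("), raw.rindex(")")
        comm = raw[lpar + 1:rpar]
        state = raw[rpar + 2:].split()[0]
        if state != "D":
            continue
        with open("/proc/%s/wchan" % pid) as f:
            wchan = f.read().strip() or "?"
        print("%7s %-20s wchan=%s" % (pid, comm, wchan))
    except (OSError, IOError):
        continue  # process exited while we were reading it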
2008 Feb 12 (0 replies): Lustre-discuss Digest, Vol 25, Issue 17
...lm_cancel_list+99}
>>> <ffffffffa02dc113>{:ptlrpc:ldlm_cancel_lru_local+915}
>>> <ffffffffa02ca293>{:ptlrpc:ldlm_resource_putref+435}
>>> <ffffffffa02dc2c9>{:ptlrpc:ldlm_prep_enqueue_req+313}
>>> <ffffffffa0394e6f>{:mdc:mdc_enqueue+1023}
>>> <ffffffffa02c1035>{:ptlrpc:lock_res_and_lock+53}
>>> <ffffffffa0268730>{:obdclass:class_handle2object+224}
>>> <ffffffffa02c5fea>{:ptlrpc:__ldlm_handle2lock+794}
>>> <ffffffffa02c106f>{:ptlrpc:unlock_res_and_lock+...
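
The frames in that trace use the 2.6-era x86_64 format "<address>{:module:function+offset}". A small sketch (the regex and sample input are illustrative) for pulling module and function names out of such lines when cross-referencing against the Lustre source:

import re

# Matches oops/backtrace frames like:
#   <ffffffffa0394e6f>{:mdc:mdc_enqueue+1023}
FRAME = re.compile(r"<([0-9a-f]+)>\{:(\w+):(\w+)\+(\d+)\}")

trace = """
<ffffffffa0394e6f>{:mdc:mdc_enqueue+1023}
<ffffffffa02dc2c9>{:ptlrpc:ldlm_prep_enqueue_req+313}
"""

for addr, module, func, offset in FRAME.findall(trace):
    print("%-8s %-28s +%-5s @ %s" % (module, func, offset, addr))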