Displaying 4 results from an estimated 4 matches for "imp_invalid".
2007 Nov 07
1
ll_cfg_requeue process timeouts
Hi,
Our environment is: 2.6.9-55.0.9.EL_lustre.1.6.3smp
I am getting the following errors from two OSSs
...
Nov 7 10:39:51 storage09.beowulf.cluster kernel: LustreError:
23045:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID
req@00000100b410be00 x4190687/t0 o101->MGS@MGC10.143.245.201@tcp_0:26
lens 232/240 ref 1 fl Rpc:/0/0 rc 0/0
Nov 7 10:39:51 storage09.beowulf.cluster kernel: LustreError:
23045:0:(client.c:519:ptlrpc_import_delay_req()) Skipped 119 previous
similar messages
Nov 7 10:50:18 storage...
2007 Oct 25
1
Error message
I'm seeing this error message on one of my OSSs but not the other
three. Any idea what is causing it?
Oct 25 13:58:56 oss2 kernel: LustreError:
3228:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID
req@f6b13200 x18040/t0 o101->MGS@MGC192.168.0.200@tcp_0:26 lens 176/184
ref 1 fl Rpc:/0/0 rc 0/0
Oct 25 13:58:56 oss2 kernel: LustreError:
3228:0:(client.c:519:ptlrpc_import_delay_req()) Skipped 39 previous
similar messages
Oct 25 14:09:35 oss2 kernel: LustreError:
3228:0:(client.c:519:p...
2008 Jan 10
4
1.6.4.1 - active client evicted
...12:40:38 LustreError: 167-0: This client was evicted by hpfs-MDT0000; in progress operations using this service will fail.
Jan 10 12:40:38 LustreError: 7975:0:(mdc_locks.c:424:mdc_finish_enqueue()) ldlm_cli_enqueue: -5
Jan 10 12:40:38 LustreError: 7975:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID req@ffff8100c5298800 x649493/t0 o101->hpfs-MDT0000_UUID@130.239.78.233@tcp:12 lens 432/912 ref 1 fl Rpc:/0/0 rc 0/0
Jan 10 12:40:38 LustreError: 7975:0:(mdc_locks.c:424:mdc_finish_enqueue()) ldlm_cli_enqueue: -108
Jan 10 12:41:40 LustreError: 7979:0:(client.c:519:ptlrpc_import_delay_req()...
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have
1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster uses all GigE and has about 608 nodes / 1854 cores.
We have a lot of jobs that die and/or go into high I/O wait; strace
shows processes stuck in fstat().
The big problem (I think; I would like some feedback on it) is that of
these 608 nodes, 209 of them have in dmesg
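The `ptlrpc_import_delay_req()` excerpts quoted in these threads all share one shape: a syslog prefix, a host name, and the `@@@ IMP_INVALID` marker. A minimal sketch for tallying how many such errors each node reports (the regex and the sample lines are assumptions modeled on the quotes above, not an official Lustre tool; lines without a hostname, like the 2008 Jan 10 excerpt, are simply skipped):

```python
import re
from collections import Counter

# Matches syslog-style Lustre error lines of the form quoted above, e.g.
# "Nov  7 10:39:51 storage09.beowulf.cluster kernel: LustreError:
#  23045:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID ..."
# Only the reporting host and the IMP_INVALID marker are needed for a tally.
LINE_RE = re.compile(
    r"^\w{3}\s+\d+ [\d:]+ (?P<host>\S+) kernel: LustreError: "
    r"\d+:\d+:\(client\.c:\d+:ptlrpc_import_delay_req\(\)\) @@@ IMP_INVALID"
)

def count_imp_invalid(lines):
    """Count IMP_INVALID delay-req errors per reporting host."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[m.group("host")] += 1
    return counts

# Hypothetical sample input, shaped like the log lines quoted in the threads.
sample = [
    "Nov  7 10:39:51 storage09.beowulf.cluster kernel: LustreError: "
    "23045:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID",
    "Oct 25 13:58:56 oss2 kernel: LustreError: "
    "3228:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID",
]
print(count_imp_invalid(sample))
```

Run against each node's dmesg output, a tally like this would quickly show whether the errors cluster on a few servers (as in the first two threads) or are cluster-wide (as in the 608-node report).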