Displaying 6 results from an estimated 6 matches for "ldlm_lib".
2008 Feb 22 (0 replies): lustre error
...: LustreError: 4567:0:(acceptor.c:442:lnet_acceptor()) Error -11 reading connection request from 192.168.0.120
Feb 22 03:26:13 node4 kernel: LustreError: 4567:0:(acceptor.c:442:lnet_acceptor()) Error -11 reading connection request from 192.168.0.11
Feb 22 03:26:14 node4 kernel: Lustre: 4816:0:(ldlm_lib.c:497:target_handle_reconnect()) hallmark-OST0004: 2a02ce4a-c2cf-36f6-1cf1-82a5c4b22459 reconnecting
Feb 22 03:26:14 node4 kernel: Lustre: 4671:0:(ldlm_lib.c:497:target_handle_reconnect()) hallmark-OST0004: 3e64ed95-8693-9c34-a32e-b803bda9017c reconnecting
Feb 22 03:26:29 node4 kernel: Lustre:...
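The negative numbers in excerpts like these are Linux errno values reported with the sign flipped; Error -11 here is EAGAIN. A minimal Python sketch, using only the standard library, decodes them (the -2 and -107 codes in the results further down decode the same way):

    import errno
    import os

    # Lustre console messages report failures as negative Linux errno
    # values; strip the sign and look the code up in the errno tables.
    def decode(code: int) -> str:
        name = errno.errorcode.get(abs(code), "UNKNOWN")
        return f"{code}: {name} ({os.strerror(abs(code))})"

    # -11 from lnet_acceptor() above; -2 and -107 appear in later results.
    for code in (-11, -2, -107):
        print(decode(code))

On Linux this prints EAGAIN (resource temporarily unavailable), ENOENT (no such file or directory) and ENOTCONN (transport endpoint is not connected) respectively.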
2010 Aug 14 (0 replies): Lost OSTs, remounted, now /proc/fs/lustre/obdfilter/$UUID/ is empty
...e scratch-OST0007, 281 recoverable clients, 0 delayed clients, last_rcvd 55834575088
Lustre: scratch-OST0007: Now serving scratch-OST0007 on /dev/mapper/ost_scratch_7 with recovery enabled
Lustre: scratch-OST0007: Will be in recovery for at least 5:00, or until 281 clients reconnect
Lustre: 6799:0:(ldlm_lib.c:1788:target_queue_last_replay_reply()) scratch-OST0007: 280 recoverable clients remain
Lustre: 6799:0:(ldlm_lib.c:1788:target_queue_last_replay_reply()) Skipped 279 previous similar messages
Lustre: scratch-OST0007.ost: set parameter quota_type=ug
Lustre: 7305:0:(ldlm_lib.c:1788:target_queue_last...
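The recovery countdown above can be watched from the OSS itself. A minimal sketch, assuming the /proc/fs/lustre/obdfilter/<target>/recovery_status file named in the subject line exists and reports a "status:" field (in the reporter's case the directory came up empty, which is the problem being asked about):

    # A minimal sketch, assuming the recovery_status proc file from the
    # subject line above is present; the target name is copied from the
    # log excerpt and will differ on other systems.
    from pathlib import Path
    import time

    STATUS = Path("/proc/fs/lustre/obdfilter/scratch-OST0007/recovery_status")

    def poll(interval: float = 10.0) -> None:
        while True:
            if not STATUS.exists():
                print("recovery_status missing -- OST may not be mounted")
                return
            text = STATUS.read_text()
            print(text.strip())
            # Assumed status keywords; stop once recovery is no longer running.
            if "COMPLETE" in text or "INACTIVE" in text:
                return
            time.sleep(interval)

    if __name__ == "__main__":
        poll()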
2008 Mar 07 (2 replies): Multihomed question: want Lustre over IB and Ethernet
...255.1@o2ib [8/64]
Lustre: Lustre Client File System; info@clusterfs.com
LustreError: 11043:0:(events.c:401:ptlrpc_uuid_to_peer()) No NID found for 36.121.255.201@tcp
LustreError: 11043:0:(client.c:58:ptlrpc_uuid_to_connection()) cannot find peer 36.121.255.201@tcp!
LustreError: 11043:0:(ldlm_lib.c:312:client_obd_setup()) can't add initial connection
LustreError: 11043:0:(obd_config.c:325:class_setup()) setup ddnlfs-MDT0000-mdc-0000010430934400 failed (-2)
LustreError: 11043:0:(obd_config.c:1062:class_config_llog_handler()) Err -2 on cfg command:
LustreError: 11141:0:(connection.c:...
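The failure chain above starts with LNET finding no route to the MDS's TCP NID from this client. Lustre NIDs follow the address@network pattern visible in the log (36.121.255.201@tcp, ...@o2ib). A purely illustrative Python sketch of splitting one apart; the o2ib example NID is hypothetical, since the one in the excerpt is truncated:

    # Purely illustrative: split a Lustre NID of the form "address@network"
    # (the format visible in the log lines above) into its two parts.
    def parse_nid(nid: str) -> tuple[str, str]:
        address, sep, network = nid.partition("@")
        if not sep:
            raise ValueError(f"not a NID: {nid!r}")
        return address, network

    # Second NID is a made-up example; only the first appears in the log.
    for nid in ("36.121.255.201@tcp", "192.168.0.1@o2ib"):
        address, network = parse_nid(nid)
        print(f"{nid}: address={address} network={network}")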
2008 Jan 10 (4 replies): 1.6.4.1 - active client evicted
...542d305-5995-f79d-1c8d-c9578393358a (at 130.239.78.238@tcp) in 271 seconds. I think it's dead, and I am evicting it.
Jan 10 12:40:38 LustreError: 27332:0:(handler.c:1502:mds_handle()) operation 101 on unconnected MDS from 12345-130.239.78.238@tcp
Jan 10 12:40:38 LustreError: 27332:0:(ldlm_lib.c:1442:target_send_reply_msg()) @@@ processing error (-107) req@ffff8100e20ffe00 x649491/t0 o101-><?>@<?>:-1 lens 512/0 ref 0 fl Interpret:/0/0 rc -107/0
Jan 10 12:47:26 Lustre: hpfs-MDT0000: haven't heard from client c542d305-5995-f79d-1c8d-c9578393358a (at 130.239.78.2...
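These eviction notices have a fixed shape, so the client UUID, NID and idle time can be pulled out mechanically when scanning many nodes' logs. A sketch; the regex is mine, written against the wording quoted above, and other Lustre versions may phrase the message differently:

    import re

    # Regex written against the eviction message wording quoted above.
    PATTERN = re.compile(
        r"haven't heard from client (?P<uuid>\S+) "
        r"\(at (?P<nid>\S+)\) in (?P<seconds>\d+) seconds"
    )

    line = ("hpfs-MDT0000: haven't heard from client "
            "c542d305-5995-f79d-1c8d-c9578393358a "
            "(at 130.239.78.238@tcp) in 271 seconds.")

    m = PATTERN.search(line)
    if m:
        print(m.group("uuid"), m.group("nid"), m.group("seconds"))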
2008 Feb 04 (32 replies): Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster is all GigE and has about 608 nodes (1854 cores).
We have a lot of jobs that die and/or go into high IO wait; strace shows processes stuck in fstat().
The big problem (I think), and the one I would like some feedback on, is that 209 of these 608 nodes have this in dmesg
2008 Feb 12 (0 replies): Lustre-discuss Digest, Vol 25, Issue 17
...>>>
>>> Thanks
>>> WangDi
>>>
>>> Brock Palen wrote:
>>>>>> On Feb 7, 2008, at 11:09 PM, Tom.Wang wrote:
>>>>>>>> MDT dmesg:
>>>>>>>>
>>>>>>>> LustreError: 9042:0:(ldlm_lib.c:1442:target_send_reply_msg()) @@@ processing error (-107) req@000001002b52b000 x445020/t0 o400-><?>@<?>:-1 lens 128/0 ref 0 fl Interpret:/0/0 rc -107/0
>>>>>>...
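For reading lines like the one above: the oNNN field in the "@@@ processing error" format is the RPC opcode, and rc -107 is ENOTCONN (the server could not reply to a client it no longer considers connected). A small lookup sketch; the two opcode names are my assumption, taken from the Lustre sources, and only the opcodes appearing in these excerpts are included:

    # Assumed opcode names (from Lustre's lustre_idl.h); only the two
    # opcodes that appear in the excerpts above are listed here.
    RPC_OPCODES = {
        101: "LDLM_ENQUEUE",  # lock enqueue, the "o101" seen earlier
        400: "OBD_PING",      # keepalive ping, the "o400" seen above
    }

    def opcode_name(field: str) -> str:
        # The field looks like "o400" in the "@@@ processing error" lines.
        return RPC_OPCODES.get(int(field.lstrip("o")), "unknown")

    print(opcode_name("o400"))  # -> OBD_PING
    print(opcode_name("o101"))  # -> LDLM_ENQUEUE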