Displaying 4 results from an estimated 4 matches for "wangdi".
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have
1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster uses all GigE and has about 608 nodes (1854 cores).
We have a lot of jobs that die and/or go into high I/O wait; strace
shows processes stuck in fstat().
The big problem (I think; I would like some feedback on it) is that,
of these 608 nodes, 209 of them have in dmesg
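A minimal sketch of confirming the fstat() hang described in this snippet,
assuming you can reach one of the stuck clients (the PID below is
hypothetical):

    # attach to the hung process and see which syscall it is blocked in
    strace -f -p 12345
    # or restrict the trace to the stat family mentioned above
    strace -f -e trace=fstat,stat -p 12345
    # evictions usually leave client-side messages as well
    dmesg | grep -i lustre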
2007 Dec 13
1
MPI-Blast + Lustre
Does anyone have any experience with mpiBLAST and Lustre? We have
MpiBlast-1.4.0-pio and lustre-1.6.3, and we are seeing some pretty
poor performance, with most of the mpiBLAST threads spending 20% to
50% of their time in disk wait. We have the GenBank nt database
split into 24 fragments (one for each of our OSTs, 3 per OSS). The
individual fragments are not striped due to the
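The snippet notes that the database fragments are not striped. A minimal
sketch of how per-file striping is inspected and set on Lustre; the path and
stripe counts below are assumptions, not values from the original post:

    # show the current stripe layout of an existing fragment
    lfs getstripe /lustre/blastdb/nt.00
    # make new files in this directory use a single stripe (one OST per file)
    lfs setstripe -c 1 /lustre/blastdb
    # or stripe wide across all available OSTs instead
    lfs setstripe -c -1 /lustre/blastdb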
2008 Feb 12
0
Lustre-discuss Digest, Vol 25, Issue 17
...>>> <ffffffff8011026a>{system_call+126}
>>>
>>> It seems the blocking_ast process was blocked here. Could you dump
>>> lustre/llite/namei.o with objdump -S lustre/llite/namei.o and send
>>> it to me?
>>>
>>> Thanks
>>> WangDi
>>>
>>> Brock Palen wrote:
>>>>>> On Feb 7, 2008, at 11:09 PM, Tom.Wang wrote:
>>>>>>>> MDT dmesg:
>>>>>>>>
>>>>>>>> LustreError: 9042:0:(ldlm_lib.c:1442:target_send_reply_msg())
>>>...
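WangDi's request above asks for a source-annotated dump of namei.o. A minimal
sketch of producing it, assuming the Lustre client tree was built with debug
info (the build-tree path is an assumption):

    cd /usr/src/lustre-1.6.3
    # -S interleaves source with disassembly; it needs objects compiled with -g
    objdump -S lustre/llite/namei.o > namei.dump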
2007 Nov 29
2
Balancing I/O Load
We are seeing some disturbing (probably due to our ignorance)
behavior from Lustre 1.6.3 right now. We have 8 OSSs with 3 OSTs
per OSS (24 physical LUNs). We just created a brand-new Lustre file
system across this configuration using the default mkfs.lustre
formatting options. We have this file system mounted across 400
clients.
At the moment, we have 63 IOzone threads running
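This thread is about balancing I/O load across OSTs. A minimal sketch of
checking OST fill and file placement from a client, plus the kind of
multi-threaded IOzone run described; the mount point, file sizes, and record
size are assumptions:

    # per-OST space usage; uneven fill often points at uneven object placement
    lfs df -h /lustre
    # see which OSTs a given test file actually landed on
    lfs getstripe /lustre/iozone/testfile.0
    # throughput test: 63 threads, each writing and reading its own 4 GB file
    iozone -t 63 -s 4g -r 1m -i 0 -i 1 -F /lustre/iozone/f{1..63}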