Displaying 9 results from an estimated 9 matches for "el5_lustr".
2008 Apr 15
5
o2ib module prevents shutdown
...unload rdma_cm
Failed to unload rdma_cm
Failed to unload ib_cm
Failed to unload ib_sa
LustreError: 131-3: Received notification of device removal
Please shutdown LNET to allow this to proceed
This happens on server and client nodes alike. We run RHEL5.1 and
OFED 1.2, kernel 2.6.18-53.1.13.el5_lustre.1.6.4.3smp from CFS/Sun.
I narrowed it down to module ko2iblnd, which I attempt to remove
first (added to PRE_UNLOAD_MODULES in /etc/init.d/openibd), but it
doesn't work. Strangely, in "lsmod" the use count of the module is
one, but I don't see where it'...
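A minimal sketch of the usual unload order on a node like this, assuming a stock Lustre 1.6 install where the lustre_rmmod helper script is shipped:
# Take Lustre and LNET down before openibd tries to unload the IB stack,
# otherwise ko2iblnd keeps rdma_cm/ib_cm/ib_sa pinned.
umount -a -t lustre        # unmount any Lustre filesystems first
lctl network down          # bring LNET down
lustre_rmmod               # unload ko2iblnd, lnet and lustre modules
/etc/init.d/openibd stop   # the OFED modules should now unload cleanly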
2007 Nov 26
15
bad 1.6.3 striped write performance
...ites are a tad slower too).
with 1M lustre stripes:
client      client                                 dd write speed (MB/s)
OS          kernel                                   a)    b)    c)    d)
1.6.2:
centos4.5   2.6.9-55.0.2.EL_lustre.1.6.2smp         202   270   118   117
centos5     2.6.18-8.1.8.el5_lustre.1.6.2rjh        166   190   117   119
1.6.3+:
centos4.5   2.6.9-55.0.9.EL_lustre.1.6.3smp          32     9    30     9
centos5     2.6.18-53.el5-lustre1.6.4rc3rjh          36    10    27    10
                                                   ^^^^  ^^^^
y...
2008 Mar 04
16
Cannot send after transport endpoint shutdown (-108)
This morning I've had both my InfiniBand and TCP Lustre clients hiccup. They are evicted from the server, presumably as a result of their high load and consequent timeouts. My question is: why don't the clients re-connect? The InfiniBand and TCP clients both give the following message when I type "df": Cannot send after transport endpoint shutdown (-108). I've
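A hedged sketch of the usual triage on an evicted client; the mount point, fsname and MGS NID below are placeholders, not taken from the post:
# Show which server imports the client considers down
lfs check servers
# Eviction and recovery messages land in the kernel log
dmesg | grep -i lustre | tail
# If an import never recovers on its own, a forced unmount/remount
# usually gets the client back
umount -f /mnt/lustre
mount -t lustre mgsnode@tcp0:/testfs /mnt/lustre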
2010 Aug 27
6
Samba and file locking
Are there issues with Samba and Lustre working together? I remember
something about turning oplocks off in samba, and while testing samba
I noticed this
[2010/08/27 17:30:59, 3] lib/util.c:fcntl_getlock(2064)
fcntl_getlock: lock request failed at offset 75694080 count 65536
type 1 (Function not implemented)
But I also found out about the flock option for Lustre. Should I set
flock on all
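A hedged sketch of the two knobs the post is asking about; the mount source and paths are placeholders:
# Lustre side: mount the client with flock support so the fcntl()/flock()
# calls smbd makes are actually implemented coherently across nodes
# (a cheaper, single-node variant is -o localflock)
mount -t lustre -o flock mgsnode@tcp0:/testfs /mnt/lustre
# Samba side: the usual advice is to disable oplocks on the exported
# share, i.e. in smb.conf:  oplocks = no  and  level2 oplocks = no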
2010 Aug 11
0
OSS: IMP_CLOSED errors
Hello.
OS CentOS 5.4
uname -a
Linux oss0 2.6.18-128.7.1.el5_lustre.1.8.1.1 #1 SMP Tue Oct 6 05:48:57 MDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Lustre 1.8.1.1
OSS server.
A lot of errors in /var/log/messages:
Aug 10 14:46:34 oss0 kernel: LustreError: 2802:0:(client.c:775:ptlrpc_import_delay_req()) Skipped 1 previous similar message
Aug 10 15:07:01 oss0 kernel: L...
2010 Jul 07
0
How to evict a dead client?
..._hash(U) dm_mem_cache(U) dm_snapshot(U) dm_zero(U) dm_mirror(U) dm_log(U) dm_mod(U) usb_storage(U) lpfc(U) scsi_transport_fc(U) cciss(U) sd_mod(U) scsi_mod(U) ext3(U) jbd(U) uhci_hcd(U) ohci_hcd(U) ehci_hcd(U)
Jul 7 14:45:11 com01 kernel: Pid: 12180, comm: ll_ost_118 Tainted: G M 2.6.18-128.7.1.el5_lustre.1.8.1.1 #1
Jul 7 14:45:11 com01 kernel: RIP: 0010:[<ffffffff8006dce9>] [<ffffffff8006dce9>] do_gettimeoffset_tsc+0x8/0x39
Jul 7 14:45:11 com01 kernel: RSP: 0018:ffff8102797b92c0 EFLAGS: 00000202
Jul 7 14:45:11 com01 kernel: RAX: 00000000000106a5 RBX: ffff8102797b9300 RCX: 000000000...
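On the subject-line question itself, a hedged sketch of manual eviction on a 1.8.x server; the client NID is a placeholder:
# On each OSS, evict the dead client from every OST it is connected to
for f in /proc/fs/lustre/obdfilter/*/evict_client; do
    echo 192.168.1.99@o2ib > $f    # placeholder client NID
done
# The MDS has an equivalent file under /proc/fs/lustre/mds/*/evict_client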
2008 Dec 24
6
Bug when using /dev/cciss/c0d2 as mdt/ost
I am trying to build lustre-1.6.6 against the pre-patched kernel downloaded
from SUN.
But as written in the Operations Manual, it creates RPMs for
2.6.18-92.1.10.el5_lustrecustom. Is there a way to ask it not to append
"custom" as the extraversion?
Running kernel is 2.6.18-92.1.10.el5_lustre.1.6.6smp.
--
Regards--
Rishi Pathak
National PARAM Supercomputing Facility
Center for Development of Advanced Computing(C-DAC)
Pune University Campus,Ganesh Khind Road
Pune-Maharastra...
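A hedged note on the extraversion question, assuming the suffix is picked up from the kernel tree being built against rather than from a Lustre configure switch; the source path below is a placeholder:
# The suffix in the RPM names comes from the EXTRAVERSION declared in the
# pre-patched kernel source tree
grep '^EXTRAVERSION' /usr/src/linux-2.6.18-92.1.10.el5_lustre/Makefile
# Editing that value before building, so that it matches the running
# kernel, changes what ends up in the package names
uname -r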
2007 Nov 23
2
How to remove OST permanently?
...to be part of my cluster? Is there a command to tell Lustre
to move all the file stripes off one of the nodes?
[lab01]/proc/fs/lustre> cat version
lustre: 1.6.3
kernel: 47
build: 1.6.3-19691231190000-PRISTINE-.cache.build.BUILD.lustre-kernel-
2.6.18.lustre.linux-2.6.18-8.1.14.el5_lustre.1.6.3smp
-- Dante
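A hedged sketch of the usual drain-and-deactivate procedure from the manuals of that era; the device number, OST UUID, fsname and paths are placeholders:
# On the MDS: find the osc device for the OST, then deactivate it so no
# new objects are allocated there
lctl dl | grep osc
lctl --device 11 deactivate              # placeholder device number
# On a client: list files with stripes on that OST, then copy/rename
# each one so it is re-created (and re-striped) on the remaining OSTs
lfs find --obd testfs-OST0003_UUID /mnt/lustre > /tmp/files_on_ost3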
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys,
My users are reporting some issues with memory on our Lustre 1.8.1 clients.
It looks like when they submit a single job at a time, the run time is about
4.5 minutes. However, when they run multiple jobs (10 or fewer) on a client
with 192GB of memory on a single node, the run time for each job
exceeds 3-4x the run time of the single process. They also noticed that
the swap space
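A hedged starting point for looking at client-side memory use on 1.8; the parameter names are from memory and worth checking against the installed release:
# How much page cache the Lustre client is allowed to keep per mount,
# and how much memory the Lustre modules report using
lctl get_param llite.*.max_cached_mb
lctl get_param memused
# Cross-check against overall memory pressure while the jobs run
free -m
grep -i slab /proc/meminfo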