similar to: bad 1.6.3 striped write performance

Displaying 17 results from an estimated 200 matches similar to: "bad 1.6.3 striped write performance"

2008 Feb 29
2
ptlrpcd
Hi all, does anybody know what ptlrpcd is good for and what it might be doing? I've seen it eating 100% CPU on an OSS where I reformatted an OST, but also in other circumstances. Regards, Thomas
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster uses all GigE and has about 608 nodes / 1854 cores. We have a lot of jobs that die and/or go into high I/O wait; strace shows processes stuck in fstat(). The big problem (I think; I would like some feedback on it) is that of these 608 nodes, 209 of them have in dmesg
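When jobs hang like this, attaching strace to one of the stuck processes shows which syscall it is blocked in; a minimal diagnostic sketch (the PID is hypothetical):

    # Attach to a suspected-stuck process and follow its children;
    # -tt adds timestamps so long-blocking calls stand out.
    strace -f -tt -p 12345

    # Summarize time spent per syscall (interrupt with Ctrl-C):
    strace -c -p 12345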
2010 Mar 12
11
Jeremy's git tags
Hi Everyone, git noob, simple question: how do I get a previous release from Jeremy's git repository? Right now, I can get the latest (2.6.32.9) by doing the following: git clone git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-xen $ cd linux-2.6-xen $ git pull I tried doing a "git tag -l" when in the linux-2.6-xen directory, but that did not print the tag
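To get a previous release from a clone like this, the tags usually need to be fetched explicitly and then checked out; a minimal sketch (the tag name v2.6.32.8 is hypothetical):

    # A plain "git pull" does not always bring in new tags; fetch them:
    git fetch --tags origin

    # List tags, optionally filtered by a pattern:
    git tag -l 'v2.6.32*'

    # Check out a specific tag onto a new branch:
    git checkout -b my-2.6.32.8 v2.6.32.8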
2007 Dec 13
1
MPI-Blast + Lustre
Does anyone have any experience with MpiBlast and Lustre? We have MpiBlast-1.4.0-pio and lustre-1.6.3, and we are seeing some pretty poor performance, with most of the mpiblast threads spending 20% to 50% of their time in disk wait. We have the GenBank nt database split into 24 fragments (one for each of our OSTs, 3 per OSS). The individual fragments are not striped due to the
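Stripe layout can be inspected and changed with lfs; a minimal sketch, assuming the option-style lfs syntax (early 1.6 releases also accepted positional arguments) and a hypothetical mount point /mnt/lustre:

    # Show the current stripe count and size of a database fragment:
    lfs getstripe /mnt/lustre/db/nt.00

    # New files in this directory will be striped across 4 OSTs
    # with a 1 MB stripe size (paths are hypothetical):
    mkdir /mnt/lustre/db-striped
    lfs setstripe -c 4 -s 1m /mnt/lustre/db-striped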
2010 Sep 03
1
Compiling lustre-client 2.0.0.1 on RHEL 4
Hi, I tried to compile lustre-client 2.0.0.1 on RHEL4 with kernel 2.6.9-89.0.28.EL-x86_64 and I got 3 errors and 1 warning during the compile. The compile is executed with the -Werror option, and it fails in all 4 cases. * Error: lustre_compat25.h CC [M] /usr/src/redhat/BUILD/lustre-2.0.0.1/lustre/fid/fid_handler.o In file included from
2013 Nov 26
2
Lustre 1.8 client on EL 6.5?
Hello, in preparation for CentOS 6.5 I was trying to build the Lustre 1.8 client on CentOS 6.4 updated with the 2.6.32-431 kernel. It seems like recent changes in fs.h (/usr/src/kernels/2.6.32-431.el6.x86_64/include/linux/fs.h) are causing problems. make[5]: Entering directory `/usr/src/kernels/2.6.32-431.el6.x86_64' cc1: warnings being treated as errors
2007 Nov 16
5
Lustre Debug level
Hi, the Lustre manual 1.6 v18 says that in production the Lustre debug level should be set fairly low. The manual also says that I can verify that level by running the following commands: # sysctl portals.debug This gives the following error: error: 'portals.debug' is an unknown key cat /proc/sys/lnet/debug gives output: ioctl neterror warning error emerg ha config console cat
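In Lustre 1.6 the sysctl namespace moved from portals.* to lnet.*, which is why the old key is reported as unknown; a minimal sketch of reading and lowering the debug mask (the exact flag set is illustrative):

    # Read the current debug mask:
    cat /proc/sys/lnet/debug

    # Restrict it to a low-noise set for production:
    echo "ioctl neterror warning error emerg console" > /proc/sys/lnet/debug

    # Equivalent via sysctl:
    sysctl -w lnet.debug="ioctl neterror warning error emerg console"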
2010 Aug 11
3
Failure when mounting Lustre
Hi, I get the following error when I try to mount Lustre on the clients. Permanent disk data: Target: lustre-OSTffff Index: unassigned Lustre FS: lustre Mount type: ldiskfs Flags: 0x72 (OST needs_index first_time update ) Persistent mount opts: errors=remount-ro,extents,mballoc Parameters: mgsnode=164.107.119.231@tcp sh: losetup: command not found mkfs.lustre: error 32512 on losetup:
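The "sh: losetup: command not found" line suggests mkfs.lustre shelled out to losetup (used when the target is a regular file attached via a loop device) and could not find it; error 32512 is 127<<8, the shell's "command not found" exit status. A minimal check, with distro-dependent package names as an assumption:

    # Is losetup installed and visible to root?
    which losetup || ls -l /sbin/losetup

    # On RHEL/CentOS it is part of util-linux (util-linux-ng on EL6):
    yum install util-linux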
2006 Sep 13
0
how to list clones for a snapshot
Hello, is there a way to list all clones for a given snapshot of a filesystem? E.g. I have the following snapshots: local/testfs@sunday local/testfs@monday local/testfs@tuesday and clone local/tuesday of local/testfs@tuesday. Now I'd like to get local/tuesday using local/testfs@tuesday as input. v.
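Every clone records its parent snapshot in its origin property, so the clones of a given snapshot can be found by scanning origin across the pool; a minimal sketch (newer ZFS releases also expose a clones property directly on the snapshot):

    # List every dataset whose origin is the snapshot in question:
    zfs list -H -o name,origin -t filesystem,volume -r local \
      | awk '$2 == "local/testfs@tuesday" {print $1}'

    # On newer ZFS versions this is built in:
    zfs get -H -o value clones local/testfs@tuesday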
2016 Jun 30
17
[PATCH v2 00/12] gendisk: Generate uevent after attribute available
The race condition is noticed between add_disk() and the creation of the disk's attributes, on virtio-blk hotplug. Userspace listens for the KOBJ_ADD uevent generated in add_disk(). At that point we haven't created the serial attribute file, so depending on how fast udev reacts, the /dev/disk/by-id/ entry doesn't always get created. As pointed out by Christoph Hellwig in the specific fix [1], virtio-blk
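The race can be observed from a guest by watching block uevents while hot-plugging a virtio-blk device; a minimal sketch using QEMU's monitor (drive, file, and serial names are hypothetical):

    # In the guest: watch kernel uevents for the block subsystem.
    udevadm monitor --kernel --subsystem-match=block &

    # On the host, in the QEMU monitor, hot-plug a disk with a serial:
    #   drive_add 0 if=none,id=drv0,file=/tmp/disk.img
    #   device_add virtio-blk-pci,drive=drv0,serial=abc123

    # Back in the guest, right after KOBJ_ADD the by-id link may not exist yet:
    ls /dev/disk/by-id/ | grep abc123 || echo "serial link not (yet) created"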
2008 Jun 12
13
Announce: Lustre 1.6.5 is available!
Hi all, At long last, Lustre 1.6.5 is available on the Sun Download Center Site. http://www.sun.com/software/products/lustre/get.jsp The change log and release notes can be read here: http://wiki.lustre.org/index.php?title=Change_Log_1.6 Thank you for your assistance; as always, you can report issues via Bugzilla (https://bugzilla.lustre.org/) Happy downloading! -- The Lustre Team --
2012 Aug 31
1
[PATCH V1] NEW API:ext:mke2fs
New API mke2fs for full configuration of the filesystem. Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com> --- daemon/ext2.c | 452 +++++++++++++++++++++++++++++++++++++++++ generator/generator_actions.ml | 18 ++ gobject/Makefile.inc | 6 +- src/MAX_PROC_NR | 2 +- 4 files changed, 475 insertions(+), 3 deletions(-) diff --git
2007 Oct 25
1
Error message
I'm seeing this error message on one of my OSSes but not the other three. Any idea what is causing it? Oct 25 13:58:56 oss2 kernel: LustreError: 3228:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID req@f6b13200 x18040/t0 o101->MGS@MGC192.168.0.200@tcp_0:26 lens 176/184 ref 1 fl Rpc:/0/0 rc 0/0 Oct 25 13:58:56 oss2 kernel: LustreError:
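IMP_INVALID means an import (the client-side connection state, here the OSS's connection to the MGS) is not currently valid; a sketch of inspecting it, where the /proc path layout is an assumption based on Lustre 1.6:

    # On the affected OSS, list configured devices and their status:
    lctl dl

    # Inspect the MGC import state:
    cat /proc/fs/lustre/mgc/*/import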
2007 Nov 07
1
ll_cfg_requeue process timeouts
Hi, our environment is: 2.6.9-55.0.9.EL_lustre.1.6.3smp. I am getting the following errors from two OSSes ... Nov 7 10:39:51 storage09.beowulf.cluster kernel: LustreError: 23045:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID req@00000100b410be00 x4190687/t0 o101->MGS@MGC10.143.245.201@tcp_0:26 lens 232/240 ref 1 fl Rpc:/0/0 rc 0/0 Nov 7 10:39:51
2008 Mar 25
2
patchless kernel
Dear All, make[5]: Entering directory `/usr/src/kernels/2.6.23.15-80.fc7-x86_64' /usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:142: warning: 'request_queue_t' is deprecated /usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:273: warning: 'request_queue_t' is deprecated /usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:312:
2008 Jan 02
9
lustre quota problems
Hello, I've several problems with quota on our test cluster: when I set the quota for a person to a given value (e.g. the values which are provided in the operations manual), I'm able to write exactly the amount which is set with setquota. But when I delete the file(s), I'm not able to use this space again. Here is what I've done in detail: lfs checkquota
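The usual quota cycle is to (re)run quotacheck, set limits, then verify the accounting; a minimal sketch, assuming Lustre 1.6's positional lfs setquota syntax and a hypothetical mount point /mnt/lustre:

    # Rebuild the quota files (run as root on a client):
    lfs quotacheck -ug /mnt/lustre

    # Give user "alice" a 1 GB soft / 2 GB hard block limit (values in KB),
    # with no inode limits:
    lfs setquota -u alice 1000000 2000000 0 0 /mnt/lustre

    # Check what is charged to the user; usage that does not drop after
    # deleting files would confirm the reclaim problem described above:
    lfs quota -u alice /mnt/lustre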
2007 Nov 07
9
How To change server recovery timeout
Hi, our Lustre environment is: 2.6.9-55.0.9.EL_lustre.1.6.3smp. I would like to change the recovery timeout from the default value of 250s to something longer. I tried this example from the manual: set_timeout <secs> Sets the timeout (obd_timeout) for a server to wait before failing recovery. We performed that experiment on our test Lustre installation with one OST. storage02 is our OSS [root@
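set_timeout is an lctl subcommand and has to be issued on the MGS node; a minimal sketch, assuming the Lustre 1.6 interface (the /proc file shows the resulting obd_timeout):

    # On the MGS, raise obd_timeout to 600 seconds:
    lctl set_timeout 600

    # Verify the value on a server or client:
    cat /proc/sys/lustre/timeout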