
Displaying 20 results from an estimated 200 matches similar to: "Announcing an updated document version (v1.7) of the Lustre 1.6 Operations Manual"

2006 May 19
2
Limitation of storage size.
I want to configure a single 50T OST storage target. Is that okay, given that the limit for ext3 is 1XT? Thanks
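The usual answer here is to split large storage into several OSTs, each under the backing-filesystem size limit. A minimal sketch, assuming hypothetical partitions /dev/sdb1 and /dev/sdb2 and a hypothetical MGS NID:

  # split one large array into smaller OSTs, each below the ldiskfs limit
  mkfs.lustre --fsname=datafs --ost --mgsnode=10.0.0.1@tcp0 /dev/sdb1
  mkfs.lustre --fsname=datafs --ost --mgsnode=10.0.0.1@tcp0 /dev/sdb2
  mount -t lustre /dev/sdb1 /mnt/ost0
  mount -t lustre /dev/sdb2 /mnt/ost1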
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple ethernet config, with the MDT and OST on the same node. Can someone tell me if the following (~150 second recovery occurring when a small 190 GB OST is re-mounted) is expected behavior, or if I'm missing something? I thought I would send this and continue with the eval while awaiting a response. I'm using
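The recovery state named in the subject can be watched directly from /proc on the OSS; a minimal check, using the path from the subject line:

  # shows recovery status, connected clients, and time remaining
  # until the recovery window closes
  cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status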
2007 Nov 12
8
More failover issues
In 1.6.0, when creating an MDT, you could specify multiple --mgsnode options and it would fail over between them. 1.6.3 only seems to take the last one, and --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib doesn't seem to fail over to the other node. Any ideas how to get around this? Robert Robert LeBlanc College of Life Sciences Computer Support Brigham Young University leblanc at
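For reference, the repeated-option form the post describes is the documented way to declare failover MGS NIDs at format time; a sketch using the NIDs from the post (fsname and device are assumptions):

  # each --mgsnode adds a failover partner for the MGS
  mkfs.lustre --mdt --fsname=testfs \
      --mgsnode=192.168.1.252@o2ib \
      --mgsnode=192.168.1.253@o2ib /dev/sdc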
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all, here is the situation: I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a failover MGS and active/active MDT with zfs. I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the shelf has 2 SAS ports, connected to a SAS HBA on each node), and I am using Lustre 2.4 on CentOS 6.4 x64. I have created 3 zfs pools: 1. mgs: # zpool
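A minimal sketch of the pool-plus-target steps this setup is heading toward; the disk ids and mirror layout are assumptions:

  # mirrored pool for the MGS from two of the shared JBOD disks
  zpool create mgs mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
  # format and mount a zfs-backed MGS target (Lustre 2.4 syntax)
  mkfs.lustre --mgs --backfstype=zfs mgs/mgt
  mount -t lustre mgs/mgt /mnt/mgs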
2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings! I am working with Lustre-2.1.2 on RHEL 6.2. First I configured it using the standard defaults over TCP/IP. Everything worked very nicely using a real, static --mgsnode=a.b.c.x value, which was the actual IP of the MGS/MDS system1 node. I am now trying to integrate it with Pacemaker-1.1.7. I believe I have most of the set-up completed, with a particular exception. The "lctl
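The check in question is presumably along these lines, where a.b.c.y stands in for the hypothetical Pacemaker-managed virtual IP:

  # verify that LNET answers on the floating (Pacemaker-managed) address
  lctl ping a.b.c.y@tcp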
2010 Aug 11
3
Failure when mounting Lustre
Hi, I get the following error when I try to mount Lustre on the clients. Permanent disk data: Target: lustre-OSTffff Index: unassigned Lustre FS: lustre Mount type: ldiskfs Flags: 0x72 (OST needs_index first_time update) Persistent mount opts: errors=remount-ro,extents,mballoc Parameters: mgsnode=164.107.119.231@tcp sh: losetup: command not found mkfs.lustre: error 32512 on losetup:
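The failure is mkfs.lustre shelling out to losetup and not finding it; a plausible check and remedy, with the package name an assumption for a Red Hat-style system:

  # error 32512 is a shell "command not found" exit status (127 << 8);
  # mkfs.lustre needs losetup when the target is a regular file
  which losetup || yum install util-linux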
2007 Nov 07
9
How To change server recovery timeout
Hi, our Lustre environment is 2.6.9-55.0.9.EL_lustre.1.6.3smp. I would like to change the recovery timeout from the default value of 250s to something longer. I tried the example from the manual: set_timeout <secs> sets the timeout (obd_timeout) for a server to wait before failing recovery. We performed that experiment on our test Lustre installation with one OST. storage02 is our OSS. [root at
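For reference, the manual command quoted above is issued through lctl; a sketch raising the timeout to an arbitrary 600 seconds:

  # raise obd_timeout from the 250s default; 600 is just an example value
  lctl set_timeout 600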
2007 Dec 23
1
rsync du (was rsync delete)
On Sat, 2007-12-22 at 18:47 -0800, Jesse Thompson wrote: > Now I'm interested in a new possibility however. Using rsync > (connecting to a remote rsync server via rsync protocol) is there a > way to measure the size of a directory, kind of like du, without > having to transfer it? Yes. Do a transfer of the directory in dry-run mode (so no data is actually copied) and pass
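A minimal sketch of that suggestion, with a hypothetical remote module path: a dry run with --stats reports the total source size without copying anything:

  # -a recurses like a real transfer, -n (dry-run) copies nothing;
  # --stats prints a "Total file size" line for the whole tree
  rsync -an --stats rsync://server/module/dir/ . | grep -i 'total file size'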
2006 Aug 23
8
acts_as_ferret with Mongrel and Edge Rails
Hi there, has anyone tried acts_as_ferret with Edge Rails and Mongrel? When I install the plugin to a project that has Edge Rails frozen, and the Mongrel gem installed, I can't start the server. There's no error; it just doesn't start. I've used acts_as_ferret in the past with WEBrick, and stable Rails releases, without a hitch. If I remove the
2013 Jul 18
3
setdiff and/or intersect for differences between vectors
Hello, I have two vectors of 1134 and 385 elements, corresponding to gene accession numbers. I need to know which numbers are common to both, i.e. the intersection of the vectors. I have used intersect(x,y) and it keeps giving me a single value: V1 V1.1 V1.2 V1.3 V1.4 V1.5 V1.6 V1.7 V1.8 V1.9 V1.10 V1.11 V1.12 1 AJ558305 AJ558305 AJ558305 AJ558305 AJ558305
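The symptom (columns V1, V1.1, ... all repeating one accession) suggests the inputs are data-frame columns rather than plain character vectors; a minimal sketch, assuming hypothetical one-accession-per-line files acc1134.txt and acc385.txt, runnable from the shell via Rscript:

  # read each list as a plain character vector, then intersect;
  # both file names are assumptions for illustration
  Rscript -e 'x <- readLines("acc1134.txt"); y <- readLines("acc385.txt"); print(intersect(x, y))'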
2010 Sep 04
0
Setting quota on Lustre file system client reboots MDS/MGS node
Hi, I used lustre-1.8.3 on CentOS 5.4. I patched the kernel according to the Lustre 1.8 operations manual. I have a problem when I want to implement quotas. My cluster configuration is: 1. one MGS/MDS host (with two devices, sda and sdb respectively), set up with the following commands: 1) mkfs.lustre --mgs /dev/sda 2) mount -t lustre /dev/sda /mnt/mgt 3) mkfs.lustre --fsname=lustre
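For context, the usual 1.8-era quota bring-up once the targets are mounted, sketched with a hypothetical user and client mount point:

  # one-time scan to build the quota files, then per-user limits
  lfs quotacheck -ug /mnt/lustre
  lfs setquota -u someuser -b 1000000 -B 1100000 -i 10000 -I 11000 /mnt/lustre
  lfs quota -u someuser /mnt/lustre   # verify the limits took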
2008 Mar 07
2
Multihomed question: want Lustre over IB and Ethernet
Chris, perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not. Shane ----- Original Message ----- From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org> To: lustre-discuss <lustre-discuss at lists.lustre.org> Sent: Fri Mar 07 12:03:17 2008 Subject: Re: [Lustre-discuss] Multihomed
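The command being hinted at is presumably tunefs.lustre --writeconf, run against each unmounted target so the configuration logs are regenerated with the new NIDs (device path is hypothetical):

  # regenerate config logs; run on stopped targets, MGS/MDT first,
  # then each OST
  tunefs.lustre --writeconf /dev/sdX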
2009 Jun 18
3
Possible Bug or limitation in Cygwin 1.7 and Rsync and file number limit
I'm not sure if this is a bug or a limitation that can be worked around with a setting somewhere, but I have found a problem with Cygwin 1.7 while using rsync. I have been using rsync and Cygwin v1.5 for quite some time. I recently started testing Cygwin v1.7 and ran into a problem with an apparent limit on the number of files Cygwin 1.7 and rsync can process in a directory. First
2010 Jun 22
7
lnet infiniband config
Hi all, I'm getting my feet wet in the infiniband lake and of course I run into some problems. It would seem I got the compilation part of the sles11 kernel 2.6.27 + Lustre 1.8.3 + OFED 1.4.2 right, because it allows me to see and use the infiniband fabric, and because ko2iblnd loads without any complaints. In /etc/modprobe.d/lustre (this is a Debian system, hence this subdir of
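For context, the file referred to typically carries an lnet networks line; a sketch mapping both fabrics, with the interface names being assumptions:

  # /etc/modprobe.d/lustre: expose the IB fabric as o2ib0 and keep
  # tcp0 on the ethernet interface
  options lnet networks="o2ib0(ib0),tcp0(eth0)"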
2008 Jan 02
9
lustre quota problems
Hello, I've several problems with quota on our test cluster: when I set the quota for a person to a given value (e.g. the values provided in the operations manual), I'm able to write exactly the amount which is set with setquota. But when I delete the file(s), I'm not able to use this space again. Here is what I've done in detail: lfs quotacheck
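One hedged first step when reported usage stops matching reality is to rebuild the quota accounting (mount point and user are hypothetical):

  # re-scan the filesystem and rebuild per-user/group usage records
  lfs quotacheck -ug /mnt/testfs
  lfs quota -u theuser /mnt/testfs   # compare reported usage afterwards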
2013 May 13
5
Serial Passthrough broken in Debian Wheezy?
Hello, I just discovered a strange bug with serial passthrough in Xen 4.1 on Debian Wheezy. The Dom0 has a GSM modem connected to the serial port. The serial port is passed through to a DomU with the options 'irq = [ 4 ]' and 'ioports = [ '3f8-3ff' ]'. This worked as expected on Debian Squeeze with Xen 4.0 and Linux kernel 2.6.32 (both for Dom0 and DomU). On Debian Wheezy with Xen
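For reference, the quoted options would sit in the DomU configuration file roughly like this (all other guest settings elided):

  # DomU config fragment: hand COM1's IRQ and I/O port range to the guest
  irq     = [ 4 ]
  ioports = [ '3f8-3ff' ]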
2008 Feb 14
9
how do you mount mountconf (i.e. 1.6) lustre on your servers?
As any of you using version 1.6 of Lustre knows, Lustre servers can now be started simply by mounting the devices they use. Even an /etc/fstab entry can be used, if the mount is delayed until the network is started. Given this change, you have also noticed that we have eliminated the initscript for Lustre that used to exist for releases prior to 1.6. I'd like to take a
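A sketch of the fstab entry the post alludes to, with device and mount point hypothetical; _netdev is the stock way to hold the mount until networking is up:

  # /etc/fstab: start the OST at boot once the network is available
  /dev/sdb  /mnt/ost0  lustre  defaults,_netdev  0 0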
2007 Oct 22
0
The mds_connect operation failed with -11
Hi, list: I'm trying to configure Lustre with: 1 MGS -------------> 192.168.3.100 with mkfs.lustre --mgs /dev/md1 ; mount -t lustre ... 1 MDT ------------> 192.168.3.101 with mkfs.lustre --fsname=datafs00 --mdt --mgsnode=192.168.3.100 /dev/sda3 ; mount -t lustre ... 4 OSTs -----------> 192.168.3.102-104 with mkfs.lustre --fsname=datafs00 --ost --mgsnode=192.168.3.100@tcp0
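Error -11 is -EAGAIN; a plausible first check is whether the MDT node can actually reach the MGS NID it was formatted against:

  # on the MGS: confirm which NIDs LNET is listening on
  lctl list_nids
  # on the MDT node: check LNET reachability of that NID
  lctl ping 192.168.3.100@tcp0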
2010 Jul 30
2
lustre 1.8.3 upgrade observations
Hello, 1) when compiling the Lustre modules for the server, the ./configure script behaves a bit oddly. The --enable-server option is silently ignored when the kernel is not 100% patched. Unfortunately the build appears to work for the server, but during the mount the error message complains about a missing "lustre" module, which is in fact loaded and running. What is really missing are the ldiskfs et al
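A hedged sketch of the build the post implies should fail loudly instead of silently; the kernel source path is an assumption:

  # point configure at a properly patched kernel tree so the server
  # modules (ldiskfs and friends) actually get built
  ./configure --with-linux=/usr/src/linux-2.6.18-lustre --enable-server
  make && make install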