Displaying 18 results from an estimated 18 matches for "mgsnode".
2007 Nov 12
8
More failover issues
In 1.6.0, when creating an MDT, you could specify multiple --mgsnode options
and it would fail over between them. 1.6.3 only seems to take the last one,
and --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib doesn't seem to fail over
to the other node. Any ideas how to get around this?
Robert
Robert LeBlanc
College of Life Sciences Computer Support
Brigh...
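A minimal sketch of the repeated --mgsnode form described above for 1.6.0, using the NIDs from the post (the fsname and target device are placeholders; verify against your mkfs.lustre version):
mkfs.lustre --mdt --fsname=testfs \
    --mgsnode=192.168.1.252@o2ib --mgsnode=192.168.1.253@o2ib /dev/sdX
Each --mgsnode option names one candidate MGS NID, and the target is expected to try them in turn at mount time.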
2007 Aug 30
2
Announcing an updated document version (v1.7) of the Lustre 1.6 Operations Manual
Dear Lustre users,
CFS is pleased to announce an updated document version (v1.7) of the
Lustre 1.6 Operations Manual, available in both PDF and HTML formats at
http://www.manual.lustre.org.
This edition of the Operations Manual includes the following enhancement:
* Addition of mballoc3 content to the Lustre Proc chapter
If you have any questions, suggestions, or recommended edits to the
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
...dt0:
# zpool create -f -o ashift=12 -O canmount=off lustre-mdt0 mirror
/dev/disk/by-id/wwn-0x50000c0f01d07a34
/dev/disk/by-id/wwn-0x50000c0f01d110c8
# mkfs.lustre --mdt --fsname=fs0 --servicenode=mds1@tcp0
--servicenode=mds2@tcp0 --param sys.timeout=5000 --backfstype=zfs
--mgsnode=mds1@tcp0 --mgsnode=mds2@tcp0 lustre-mdt0/mdt0
warning: lustre-mdt0/mdt0: for Lustre 2.4 and later, the target
index must be specified with --index
Permanent disk data:
Target: fs0:MDT0000
Index: 0
Lustre FS: fs0
Mount type: zfs
Flags: 0x106...
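The warning suggests rerunning with an explicit target index; a sketch of the same command with --index=0 added (hostnames, pool, and parameters as in the post):
# mkfs.lustre --mdt --fsname=fs0 --index=0 --servicenode=mds1@tcp0 \
    --servicenode=mds2@tcp0 --param sys.timeout=5000 --backfstype=zfs \
    --mgsnode=mds1@tcp0 --mgsnode=mds2@tcp0 lustre-mdt0/mdt0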
2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings!
I am working with Lustre-2.1.2 on RHEL 6.2. First I configured it
using the standard defaults over TCP/IP. Everything worked very
nicely using a real, static --mgsnode=a.b.c.x value which was the
actual IP of the MGS/MDS system1 node.
I am now trying to integrate it with Pacemaker-1.1.7. I believe I
have most of the set-up completed with a particular exception. The
"lctl ping" command cannot ping the pacemaker IP alias (say a.b.c.d).
The generic pi...
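For reference, lctl ping expects an LNET NID rather than a bare IP, so the alias would normally be pinged as something like (network name tcp0 is an assumption):
lctl ping a.b.c.d@tcp0
LNET generally advertises only the NIDs configured when the module was loaded, so an IP alias added later by Pacemaker may not answer lctl ping even though ICMP ping works.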
2010 Sep 04
0
Set quota on Lustre system file client, reboots MDS/MGS node
....
I have a problem when I want to implement quota.
My cluster configuration is:
1. one MGS/MDS host (with two devices: sda and sdb, respectively)
with the following commands:
1) mkfs.lustre --mgs /dev/sda
2) mount -t lustre /dev/sda /mnt/mgt
3) mkfs.lustre --fsname=lustre --mdt --mgsnode=<mgs IP@net> --param
mdt.quota_type=ug /dev/sdb
4) mount -t lustre /dev/sdb /mnt/mdt
2. one OSS host (with two devices: sda and sdb as OST targets)
with the following commands:
1) mkfs.lustre --fsname=lustre --ost --mgsnode=<mgs IP@net> --param
ost.quota_type=ug /dev/...
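With quota_type set as above, quota is normally initialised from a client mount before limits are applied; a sketch (mount point, user name, and limits are placeholders):
lfs quotacheck -ug /mnt/lustre
lfs setquota -u someuser -b 1000000 -B 1100000 -i 10000 -I 11000 /mnt/lustre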
2010 Aug 11
3
Failure when mounting Lustre
Hi,
I get the following error when I try to mount lustre on the clients.
Permanent disk data:
Target: lustre-OSTffff
Index: unassigned
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x72
(OST needs_index first_time update )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=164.107.119.231@tcp
sh: losetup: command not found
mkfs.lustre: error 32512 on losetup: Unknown error 32512
mkfs.lustre FATAL: Loop device setup for /dev/sdb failed: Unknown error
32512
Connection to wci70 closed.
mount.lustre: mount /dev/sdb at /tmp/ost failed: Block device required
Do you ne...
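Two quick checks that narrow this down (standard shell commands; device path as in the post):
ls -l /dev/sdb    # a usable target shows 'b' (block device) in the first column
which losetup     # mkfs.lustre tries loop setup when it does not see the target as a block device
If losetup is genuinely missing, the distribution's util-linux package normally provides it.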
2007 Nov 07
9
How To change server recovery timeout
Hi,
Our lustre environment is:
2.6.9-55.0.9.EL_lustre.1.6.3smp
I would like to change the recovery timeout from the default value of 250s to
something longer.
I tried the example from the manual:
set_timeout <secs> Sets the timeout (obd_timeout) for a server
to wait before failing recovery.
We performed that experiment on our test lustre installation with one
OST.
storage02 is our OSS
[root@
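For the obd_timeout value itself, one way to inspect and change it per node is via /proc (600 is only an example value; the change is not persistent across reboots):
cat /proc/sys/lustre/timeout
echo 600 > /proc/sys/lustre/timeout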
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
...1.6.4.2 with the vanilla 2.6.18.8 kernel with
a Scientific Linux 5 (derived from RHEL5) distro with e2fsprogs
1.40.4.cfs1. I'm doing the following:
aaa()
{
set -x
dmesg -c >/dev/null
mkfs.lustre --fsname datafs --mdt --mgs --reformat /dev/sda1
mkfs.lustre --fsname datafs --ost --mgsnode=pool4@tcp --reformat /dev/sda2
e2label /dev/sda1
e2label /dev/sda2
mount.lustre /dev/sda1 /mnt/data/mdt
mount.lustre /dev/sda2 /mnt/data/ost0
dmesg -c >dmesg.0
mount.lustre pool4 at tcp:/datafs /mnt/datafs
dmesg -c >dmesg.1
umount /mnt/datafs
umount /mnt/data/ost0
umou...
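The file named in the subject can be read directly on the server to watch recovery progress, e.g.:
cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status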
2007 Oct 22
0
The mds_connect operation failed with -11
Hi, list:
I'm trying to configure Lustre with:
1 MGS -------------> 192.168.3.100 with mkfs.lustre --mgs /dev/md1 ;
mount -t lustre ...
1 MDT ------------> 192.168.3.101 with mkfs.lustre --fsname=datafs00
--mdt --mgsnode=192.168.3.100 /dev/sda3 ; mount -t lustre ...
4 ost -----------> 192.168.3.102-104 with mkfs.lustre --fsname=datafs00
--ost --mgsnode=192.168.3.100@tcp0 /dev/sda3 ; mount -t lustre.....
foreach node
But when I try mount from any node:
LOG IN NODE:
LustreError: 4743:0:(obd_mount.c:1927:...
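-11 is EAGAIN, which typically means the MDS is still busy or in recovery and the client will retry. For reference, with the configuration above a client mount would look roughly like this (mount point is a placeholder):
mount -t lustre 192.168.3.100@tcp0:/datafs00 /mnt/datafs00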
2010 Jul 30
2
lustre 1.8.3 upgrade observations
...h an unpatched kernel;
configure: WARNING: disabling server build
checking whether to enable pinger support... yes
<snap>
Used SuSE-2.6-sles11 2.6.27.39-0.3 kernel source from the lustre site.
2) tunefs.lustre fails on MDS/MDT and OSS/OST.
<snip>
tunefs.lustre --ost --fsname=lustre --mgsnode=mds1@tcp0 --verbose --param="failover.mode=failout" /dev/sdb
checking for existing Lustre data: found CONFIGS/mountdata
Reading CONFIGS/mountdata
Read previous values:
Target: lustre-OST0027
Index: 39
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x2
(OS...
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
...;> > >
> >> > > I make and mount the mdt with (which has both IB and Ethernet, subnet
> >> > > 36.122.x.x is IB, 36.121.x.x is Ethernet):
> >> > >
> >> > > # mkfs.lustre --mdt --mgs
> >> > > --mgsnode="36.122.255.201@o2ib0,36.121.255.201@tcp0" <... > /dev/md0
> >> > > # mount -t lustre /dev/md0 /lfs/mdtb
> >> > >
> >> > > But, at this point, the ksocklnd module is loaded rather than the
> >> > > ko...
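ko2iblnd is normally only loaded when an o2ib network is listed in the lnet module options before the targets are mounted, roughly (interface names ib0/eth0 are assumptions):
options lnet networks="o2ib0(ib0),tcp0(eth0)"
placed in /etc/modprobe.conf or a file under /etc/modprobe.d/.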
2010 Jun 22
7
lnet infiniband config
Hi all,
I'm getting my feet wet in the infiniband lake and of course I run into
some problems.
It would seem I got the compilation part of sles11 kernel 2.6.27 +
Lustre 1.8.3 + ofed 1.4.2 right, because it allows me to see and use the
infiniband fabric, and because ko2iblnd loads without any complaints.
In /etc/modprobe.d/lustre (this is a Debian system, hence this subdir of
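Once the module options are in place, the NIDs that LNET actually configured can be checked with standard lctl commands:
modprobe lnet
lctl network up
lctl list_nids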
2013 Sep 19
0
Files written to an OST are corrupted
...ell's OMSA was used to do a complete,
slow initialization of the volume. No further action was taken until that process
completed.
The index, inode count, and stripe are taken from the files above (not shown in
this email) when the volumes were first created.
309 mkfs.lustre --ost --mgsnode=10.10.1.140@tcp0 --fsname=umt3 --reformat --index=35 \
--mkfsoptions="-i 2000000" --reformat --mountfsoptions="errors=remount-ro,extents,mballoc,stripe=256" /dev/sdc
The UUID here is taken from the /etc/fstab, where the entry has been commented
out until we are ready to again u...
2013 Feb 12
2
Lost folders after changing MDS
...-e base64 -d . > /tmp/mdsea; <copy all MDS files as above>; cd /mnt/mds_new; setfattr --restore=/tmp/mdsea
Run on all nodes (clients and OSSs & MDS/MGS) to change the MGS (where mgs is the name of the server and /dev/sdb1 is the name of the unmounted OST)
tunefs.lustre --mgsnode=mgs@tcp0 /dev/sdb1
tunefs.lustre --erase-param --mgsnode=192.168.16.3@tcp --writeconf /dev/sdc1
After copying with rsync, I had to
cd /srv/mdt;
rm CATALOGS OBJECTS/*
on the new MDT partition.
I also upgraded from 1.8.8 to 2. I managed to mount the Lustre filesystem and if I do lfs df...
2008 Jan 02
9
lustre quota problems
Hello,
I've several problems with quota on our test cluster:
When I set the quota for a person to a given value (e.g. the values which
are provided in the operations manual), I'm able to write exactly the amount
which is set with setquota.
But when I delete the files (file), I'm not able to use this space again.
Here is what I've done in detail:
lfs checkquota
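One way to see whether freed blocks are credited back is to compare usage against the limits from a client (user name and mount point are placeholders):
lfs quota -u someuser /mnt/lustre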
2008 Jan 15
19
How do you make an MGS/OSS listen on 2 NICs?
I am running the CentOS 5 distribution without adding any updates from CentOS. I am using the Lustre 1.6.4.1 kernel and software.
I have two NICs that run though different switches.
I have the lustre options in my modprobe.conf to look like this:
options lnet networks=tcp0(eth1,eth0)
My MGS seems to be listening only on the first interface, however.
When I try and ping the 1st interface (eth1)