similar to: Set quota on Lustre file system client, reboots MDS/MGS node

Displaying results from an estimated 900 matches similar to: "Set quota on Lustre file system client, reboots MDS/MGS node"

2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all, here is the situation: I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a failover MGS, active/active MDT with ZFS. I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the shelf has 2 SAS ports, connected to a SAS HBA on each node), and I am using Lustre 2.4 on CentOS 6.4 x64. I have created 3 zfs pools: 1. mgs: # zpool
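A minimal sketch of the ZFS-backed MGS/MDT formatting steps such a setup typically involves (pool names, device paths, and fsname are placeholders, not the poster's):

    # create a pool for the MGS from the shared JBOD disks (devices are illustrative)
    zpool create mgspool mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
    # format the MGS and an MDT on ZFS, listing both nodes as MGS NIDs for failover
    mkfs.lustre --mgs --backfstype=zfs mgspool/mgs
    mkfs.lustre --fsname=testfs --mdt --index=0 --backfstype=zfs \
        --mgsnode=10.0.0.22@tcp --mgsnode=10.0.0.23@tcp mdtpool/mdt0
    mount -t lustre mgspool/mgs /mnt/mgs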
2007 Oct 22
0
The mds_connect operation failed with -11
Hi, list: I'm trying to configure Lustre with: 1 MGS -------------> 192.168.3.100 with mkfs.lustre --mgs /dev/md1 ; mount -t lustre ... 1 MDT ------------> 192.168.3.101 with mkfs.lustre --fsname=datafs00 --mdt --mgsnode=192.168.3.100 /dev/sda3 ; mount -t lustre ... 4 OSTs -----------> 192.168.3.102-104 with mkfs.lustre --fsname=datafs00 --ost --mgsnode=192.168.3.100@tcp0
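-11 is -EAGAIN, which in this situation often indicates the target is still starting or recovering, or an LNET connectivity problem, rather than an on-disk one; a first check, assuming the NIDs above:

    # from the MDS, confirm the MGS is reachable over LNET
    lctl ping 192.168.3.100@tcp
    # and confirm the local NIDs match what mkfs.lustre was told
    lctl list_nids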
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple Ethernet config, with MDT and OST on the same node. Can someone tell me if the following (~150 second recovery occurring when a small 190 GB OST is re-mounted) is expected behavior, or if I'm missing something? I thought I would send this and continue with the eval while awaiting a response. I'm using
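The recovery window can be inspected directly while the OST is recovering; a quick check, using the obdfilter path from the subject line:

    # shows status (RECOVERING/COMPLETE), connected clients, and time remaining
    cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status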
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this: For example: mount -t ldiskfs /dev/old /mnt/ost_old mount -t ldiskfs /dev/new /mnt/ost_new rsync -aSv /mnt/ost_old/ /mnt/ost_new # note trailing slash on ost_old/ If you are unable to connect both
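To round out the recipe quoted above: once the rsync completes, both ldiskfs mounts are typically unmounted and the new target remounted as Lustre; a writeconf pass is one common extra step when the server NIDs have changed (a hedged sketch, device names as in the post):

    umount /mnt/ost_old /mnt/ost_new
    # regenerate configuration logs if the MGS/MDS NIDs changed (assumption)
    tunefs.lustre --writeconf /dev/new
    mount -t lustre /dev/new /mnt/mdt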
2007 Oct 15
3
iptables rules for lustre 1.6.x and MGS recovery procedures
Hi, I would like to know which TCP/UDP ports I should keep open in my firewall policies on my MGS server so that the MGS server can be fire-walled. Also, in the event of loss of the MGT, would it be possible to recreate the MGT without losing data or bringing the filesystem down (i.e. by using cached information from the MDTs and OSTs)? Thanks, Anand
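By default LNET over TCP listens on port 988 (the acceptor port), and Lustre does not use UDP, so a minimal rule looks like the following (source subnet is a placeholder):

    # allow inbound LNET/TCP from the cluster network to the MGS
    iptables -A INPUT -p tcp --dport 988 -s 10.0.0.0/24 -j ACCEPT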
2007 Aug 30
2
Announcing an updated document version (v1.7) of the Lustre 1.6 Operations Manual
Dear Lustre users, CFS is pleased to announce an updated document version (v1.7) of the Lustre 1.6 Operations Manual, available in both PDF and HTML formats at http://www.manual.lustre.org. This edition of the Operations Manual includes the following enhancement: * Addition of mballoc3 content to the Lustre Proc chapter If you have any questions, suggestions, or recommended edits to the
2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings! I am working with Lustre-2.1.2 on RHEL 6.2. First I configured it using the standard defaults over TCP/IP. Everything worked very nicely using a real, static --mgsnode=a.b.c.x value, which was the actual IP of the MGS/MDS system1 node. I am now trying to integrate it with Pacemaker-1.1.7. I believe I have most of the set-up completed, with a particular exception. The "lctl
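The usual quick test is to ping the floating address as a NID before handing it to clients as --mgsnode (using the poster's a.b.c.x placeholder):

    # succeeds only if LNET on the Pacemaker-managed IP answers
    lctl ping a.b.c.x@tcp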
2008 Mar 07
2
Multihomed question: want Lustre over IB and Ethernet
Chris, perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not. Shane ----- Original Message ----- From: lustre-discuss-bounces@lists.lustre.org <lustre-discuss-bounces@lists.lustre.org> To: lustre-discuss <lustre-discuss@lists.lustre.org> Sent: Fri Mar 07 12:03:17 2008 Subject: Re: [Lustre-discuss] Multihomed
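For reference, the writeconf procedure being alluded to, as a hedged 1.6-era sketch (device paths are placeholders; all targets must be unmounted first):

    tunefs.lustre --writeconf /dev/mdtdev   # on the MDS
    tunefs.lustre --writeconf /dev/ostdev   # on every OSS, for each OST
    # then remount in order: MGS/MDT first, OSTs after, clients last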
2010 Jul 30
2
lustre 1.8.3 upgrade observations
Hello, 1) When compiling the Lustre modules for the server, the ./configure script behaves a bit oddly. The --enable-server option is silently ignored when the kernel is not 100% patched. Unfortunately the build works for the server, but during the mount the error message complains about a missing "lustre" module, which is in fact loaded and running. What is really missing are the ldiskfs et al
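A hedged sketch of the server build being attempted; the key point is that --enable-server only takes effect against a patched kernel tree (paths are placeholders):

    ./configure --with-linux=/usr/src/linux-2.6-lustre-patched --enable-server
    make && make install
    # without the patched tree, configure silently falls back to a client build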
2007 Nov 19
6
Dedicated MGS?
This may be in the documentation. If so, I missed it. If a site has multiple Lustre file systems, the documentation implies that there only needs to be a single MGS for an entire site (regardless of the number of file systems). However, I also know it is fairly common to have a combined MGS/MDT. So here are the questions. 1. If we are going to have several Lustre file systems,
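The single-MGS layout the documentation describes would look roughly like this (NID and devices are placeholders; each filesystem gets its own MDT, but all register with the one MGS):

    mkfs.lustre --mgs /dev/sdX                                # one standalone MGS per site
    mkfs.lustre --fsname=fs1 --mdt --mgsnode=10.0.0.1@tcp /dev/sdY
    mkfs.lustre --fsname=fs2 --mdt --mgsnode=10.0.0.1@tcp /dev/sdZ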
2010 Aug 14
0
Lost OSTs, remounted, now /proc/fs/lustre/obdfilter/$UUID/ is empty
Hello, We had a problem with our disk controller that required a reboot. 2 of our OSTs remounted and went through the recovery window but clients hang trying to access them. Also /proc/fs/lustre/obdfilter/$UUID/ is empty for that OST UUID. LDISKFS FS on dm-5, internal journal on dm-5:8 LDISKFS-fs: delayed allocation enabled LDISKFS-fs: file extents enabled LDISKFS-fs: mballoc enabled
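Two quick checks commonly used in this state (hedged; the glob stands in for the affected OST's UUID):

    lctl dl                                           # device list with setup status
    cat /proc/fs/lustre/obdfilter/*/recovery_status   # an empty directory here is the symptom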
2007 Jan 17
0
Lustre 1.6.0 beta7 is now available
NOTE: BETA SOFTWARE, NOT FOR PRODUCTION USE Cluster File Systems is pleased to announce the next beta version of Lustre 1.6, which includes the following new features: * Dynamic service threads - within a small range, extra service threads are started automatically when the request queue builds up. * Mixed-endian environment fixes * Easy permanent OST removal * MGS failover * MGS proc
2010 Sep 18
0
no failover with failover MDS
Hi all, we have two servers A, B as a failover MGS/MDT pair, with IPs A=10.12.112.28 and B=10.12.115.120 over tcp. When server B crashes, MGS and MDT are mounted on A. Recovery times out with only one out of 445 clients recovered. Afterwards, the MDT lists all its OSTs as UP and in the logs of the OSTs I see: Lustre: MGC10.12.112.28@tcp: Connection restored to service MGS using nid
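For comparison, the client-side mount that names both MGS NIDs, so clients can follow a failover (fsname is a placeholder):

    mount -t lustre 10.12.112.28@tcp:10.12.115.120@tcp:/fsname /mnt/lustre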
2010 Aug 11
3
Failure when mounting Lustre
Hi, I get the following error when I try to mount lustre on the clients. Permanent disk data: Target: lustre-OSTffff Index: unassigned Lustre FS: lustre Mount type: ldiskfs Flags: 0x72 (OST needs_index first_time update ) Persistent mount opts: errors=remount-ro,extents,mballoc Parameters: mgsnode=164.107.119.231@tcp sh: losetup: command not found mkfs.lustre: error 32512 on losetup:
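The "sh: losetup: command not found" line is the actual failure: mkfs.lustre shells out to losetup when its target is a regular file rather than a block device, and error 32512 is exit code 127 (command not found) shifted by 256, which points at PATH. A hedged check:

    # losetup ships with util-linux and normally lives in /sbin
    export PATH=$PATH:/sbin:/usr/sbin
    which losetup      # should resolve before re-running mkfs.lustre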
2007 Nov 12
8
More failover issues
In 1.6.0, when creating an MDT, you could specify multiple --mgsnode options and it would fail over between them. 1.6.3 only seems to take the last one, and --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib doesn't seem to fail over to the other node. Any ideas how to get around this? Robert LeBlanc College of Life Sciences Computer Support Brigham Young University leblanc at
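The two spellings in question, sketched with placeholder device and fsname (per this thread, their behavior differed between 1.6.0 and 1.6.3):

    # colon-separated failover NIDs inside one option
    mkfs.lustre --fsname=fs1 --mdt \
        --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib /dev/sdX
    # the same pair as repeated options
    mkfs.lustre --fsname=fs1 --mdt \
        --mgsnode=192.168.1.252@o2ib --mgsnode=192.168.1.253@o2ib /dev/sdX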
2007 Nov 07
9
How To change server recovery timeout
Hi, our Lustre environment is 2.6.9-55.0.9.EL_lustre.1.6.3smp. I would like to change the recovery timeout from the default value of 250s to something longer. I tried the example from the manual: set_timeout <secs> Sets the timeout (obd_timeout) for a server to wait before failing recovery. We performed that experiment on our test Lustre installation with one OST. storage02 is our OSS [root at
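Both forms the 1.6-era manual offers, as a hedged sketch (fsname is a placeholder, and the conf_param form must be run on the MGS):

    lctl set_timeout 600                      # transient; the form quoted above
    lctl conf_param testfs.sys.timeout=600    # permanent; written to the config log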
2010 Sep 09
1
What's the correct sequence to umount multiple Lustre file systems
Any recommendation about the sequence to unmount multiple Lustre file systems with a combined MGS/MDT or separate MGS and MDT? Thanks. Ming
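A commonly cited order, sketched under the assumption that the operations manual's shutdown sequence applies (mount points are placeholders):

    umount /mnt/lustre    # 1. every client
    umount /mnt/mdt       # 2. the MDT (or combined MGS/MDT)
    umount /mnt/ost*      # 3. each OST
    umount /mnt/mgs       # 4. a separate MGS, last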