Displaying 20 results from an estimated 2000 matches similar to: "How to bypass failed OST without blocking?"

2007 Nov 07
9
How To change server recovery timeout
Hi, our Lustre environment is 2.6.9-55.0.9.EL_lustre.1.6.3smp. I would like to change the recovery timeout from the default value of 250s to something longer. I tried the example from the manual: set_timeout <secs> sets the timeout (obd_timeout) for a server to wait before failing recovery. We performed that experiment on our test Lustre installation with one OST. storage02 is our OSS [root at
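A hedged sketch of the usual 1.6.x ways to raise it (fsname and value are placeholders):
[code]
# Permanent, filesystem-wide: run on the MGS (1.6.x conf_param syntax)
lctl conf_param testfs.sys.timeout=600
# Temporary, single node: lasts until the next reboot
echo 600 > /proc/sys/lustre/timeout
[/code]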
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi, we run a four-node Lustre 2.3 setup, and I needed both to change the hardware under the MGS/MDS and to reassign an OSS IP. At the same time, I added a brand-new 10GbE network to the system, which was the reason for the MDS hardware change. I ran tunefs.lustre --writeconf as per chapter 14.4 of the Lustre Manual, and everything mounts fine. Log regeneration apparently works, since it seems to do something, but
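For reference, a minimal sketch of the writeconf procedure the manual describes (device paths are placeholders):
[code]
umount /mnt/lustre                      # on every client
umount /mnt/mdt /mnt/ost*               # on every server
tunefs.lustre --writeconf /dev/mdtdev   # on the MDS
tunefs.lustre --writeconf /dev/ostdev   # on each OSS, once per OST
# remount in order: MGS/MDT first, then OSTs, then clients
[/code]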
2007 Nov 23
2
How to remove OST permanently?
All, I've added a new 2.2 TB OST to my cluster easily enough, but this new disk array is meant to replace several smaller OSTs that I used to have, which were only 120 GB, 500 GB, and 700 GB. Adding an OST is easy, but how do I REMOVE the small OSTs that I no longer want to be part of my cluster? Is there a command to tell Lustre to move all the file stripes off one of the nodes?
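A sketch of the commonly suggested sequence, assuming an OST named lustre-OST0001 (the UUID and device number are placeholders):
[code]
# On the MDS: stop new objects landing on the OST ("lctl dl" gives <devno>)
lctl --device <devno> deactivate
# Find files striped on it, then copy each one and rename over the original
lfs find --obd lustre-OST0001_UUID /mnt/lustre
# On the MGS: record the OST as permanently inactive
lctl conf_param lustre-OST0001.osc.active=0
[/code]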
2008 Jan 15
19
How do you make an MGS/OSS listen on 2 NICs?
I am running on the CentOS 5 distribution without adding any updates from CentOS. I am using the Lustre 1.6.4.1 kernel and software. I have two NICs that run through different switches. The lustre options in my modprobe.conf look like this: options lnet networks=tcp0(eth1,eth0) However, my MGS seems to be listening only on the first interface. When I try and ping the 1st interface (eth1)
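One hedged guess: with networks=tcp0(eth1,eth0) both NICs belong to a single LNET network, so only one is used. Giving each NIC its own network may be what is wanted here:
[code]
# modprobe.conf: one LNET network per interface
options lnet networks=tcp0(eth0),tcp1(eth1)
[/code]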
2007 Nov 12
8
More failover issues
In 1.6.0, when creating an MDT, you could specify multiple --mgsnode options and it would fail over between them. 1.6.3 only seems to take the last one, and --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib doesn't seem to fail over to the other node. Any ideas how to get around this? Robert Robert LeBlanc College of Life Sciences Computer Support Brigham Young University leblanc at
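For comparison, the repeated-flag form that 1.6.x is documented to accept (NIDs taken from the post; the device path is a placeholder):
[code]
mkfs.lustre --ost --fsname=lustre \
    --mgsnode=192.168.1.252@o2ib --mgsnode=192.168.1.253@o2ib /dev/ostdev
[/code]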
2010 Jul 13
4
Enable async journals
Hi all, we use SLES 11 and Lustre 1.8.1.1 + patches and would like to convert a Lustre FS that uses external journals to one with async journals enabled. The question is whether this procedure is correct: umount <filesystem> on all clients; umount <osts> on all OSSes; e2fsck <ost-device> on all OSSes for all OSTs; tune2fs -O ^has_journal <ost-device> on all
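A sketch of the per-OST conversion, assuming the external journal device is being abandoned (the device path is a placeholder, and sync_journal is the 1.8-era async-commit tunable as I recall it, so verify it on your build):
[code]
e2fsck -f /dev/ostdev                  # check before touching the journal
tune2fs -O ^has_journal /dev/ostdev    # detach the external journal
tune2fs -J size=400 /dev/ostdev        # recreate a 400MB internal journal
lctl set_param obdfilter.*.sync_journal=0   # 0 = async commit (on the OSS)
[/code]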
2008 Mar 07
2
Multihomed question: want Lustre over IB and Ethernet
Chris, perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not. Shane ----- Original Message ----- From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org> To: lustre-discuss <lustre-discuss at lists.lustre.org> Sent: Fri Mar 07 12:03:17 2008 Subject: Re: [Lustre-discuss] Multihomed
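If a writeconf is indeed what is needed, the 1.6 form is (device path is a placeholder, servers unmounted first):
[code]
tunefs.lustre --writeconf /dev/mdtdev   # then the same on each OST
[/code]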
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster uses all GigE and has about 608 nodes, 1854 cores. We have a lot of jobs that die and/or go into high IO wait; strace shows processes stuck in fstat(). The big problem (I think; I would like some feedback on this) is that of these 608 nodes, 209 of them have in dmesg
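A hedged starting point for triage (fsname and value are placeholders):
[code]
dmesg | grep -i evict            # how often and by which target we get evicted
cat /proc/sys/lustre/timeout     # current obd_timeout on this node
# raising the timeout often helps loaded GigE clusters; run on the MGS
lctl conf_param lustre.sys.timeout=300
[/code]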
2008 Jan 02
9
lustre quota problems
Hello, I've several problems with quota on our test cluster: When I set the quota for a person to a given value (e.g. the values which are provided in the operations manual), I'm able to write exactly the amount which is set with setquota. But when I delete the file(s), I'm not able to use this space again. Here is what I've done in detail: lfs checkquota
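For context, a sketch of the usual sequence (the -b/-B/-i/-I flag form is from newer lfs releases; 1.6-era lfs took positional limits, so adjust for your version):
[code]
lfs quotacheck -ug /mnt/lustre     # (re)build the quota files once
lfs setquota -u someuser -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
lfs quota -u someuser /mnt/lustre  # verify usage against the limits
[/code]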
2007 Nov 06
4
Checksum Algorithm
Hi, we have seen a huge performance drop in 1.6.3, due to checksums being enabled by default. I looked at the algorithm being used, and it is actually CRC32, which is a very strong algorithm for detecting all sorts of problems, such as single-bit errors, swapped bytes, and missing bytes. I've been experimenting with using a simple XOR algorithm. I've been able to recover
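For anyone needing the old behaviour back, a hedged sketch of the client-side toggle (the /proc path is from memory for 1.6.x, so verify it on your build):
[code]
# disable data checksums on every OSC of this client
for f in /proc/fs/lustre/osc/*/checksums; do echo 0 > $f; done
[/code]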
2008 Mar 04
16
Cannot send after transport endpoint shutdown (-108)
This morning I've had both my InfiniBand and TCP Lustre clients hiccup. They are evicted from the server, presumably as a result of their high load and consequent timeouts. My question is: why don't the clients re-connect? The InfiniBand and TCP clients both give the following message when I type "df": Cannot send after transport endpoint shutdown (-108). I've
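When an import stays evicted, the blunt but reliable client-side fix is a forced remount (NID and fsname are placeholders):
[code]
umount -f /mnt/lustre
mount -t lustre 10.0.0.1@tcp:/lustre /mnt/lustre
[/code]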
2007 Nov 19
6
Dedicated MGS?
This may be in the documentation. If so, I missed it. If a site has multiple Lustre file systems, the documentation implies that there only needs to be a single MGS for an entire site (regardless of the number of file systems). However, I also know it is fairly common to have a combined MGS/MDT. So here are the questions. 1. If we are going to have several Lustre file systems,
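For reference, a minimal sketch of a dedicated MGS serving two filesystems (device paths and NID are placeholders):
[code]
mkfs.lustre --mgs /dev/mgsdev                              # standalone MGS
mkfs.lustre --fsname=fs1 --mdt --mgsnode=10.0.0.1@tcp /dev/mdt1dev
mkfs.lustre --fsname=fs2 --mdt --mgsnode=10.0.0.1@tcp /dev/mdt2dev
[/code]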
2010 Aug 06
1
Deprecated client still shown on OST exports
Some clients were removed several weeks ago but are still listed in: ls -l /proc/fs/lustre/obdfilter/*/exports/ This was found after tracing back mystery tcp packets to the OSS. Although this is causing no damage, it raises the question of when former clients will be cleared from the OSS. Is there a way to manually remove these exports from the OSS? -- Regards, David
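One hedged option: evicting the stale NID by hand drops its export (the evict_client proc file is from the 1.8 era as I recall; verify it exists on your build):
[code]
echo "192.168.0.99@tcp" > /proc/fs/lustre/obdfilter/lustre-OST0000/evict_client
[/code]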
2010 Aug 11
3
lfs --obd discrepancy to lctl dl (1.8.3)
Hello, lfs prints a different obd index (obdidx) than lctl dl does. We use single striping.
cluster1 tmp # lfs find --obd scia-OST0017_UUID /data/scia/L0/V0.00/20100327/SCI_NL__0PNPDE20100327_193441_000040582088_00071_42209_1158.N1
/data/scia/L0/V0.00/20100327/SCI_NL__0PNPDE20100327_193441_000040582088_00071_42209_1158.N1
cluster1 tmp # lfs getstripe
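The two numberings differ by design, as far as I know: lctl dl lists local device slots, while the obdidx from getstripe is the OST's index within the LOV. A quick comparison (file path is a placeholder):
[code]
lfs getstripe <file>    # obdidx column = OST index inside the LOV
lctl dl | grep osc      # leftmost number = local device slot only
[/code]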
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID
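If the OST really should be active, a hedged first step is to find its local device number and reactivate it:
[code]
lctl dl | grep OST0006          # note the leading device number
lctl --device <devno> activate  # <devno> from the line above
[/code]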
2008 Feb 05
2
lctl deactivate questions
Hi; One of our OSTs filled up. Once we realized this, we executed lctl --device 9 deactivate on our filesystem's combo MDS/MGS machine. We saw in the syslog that the OST in question was deactivated: Lustre: setting import ufhpc-OST0008_UUID INACTIVE by administrator request However, 'lfs df' on the clients does not show that the OST is deactivated there, unless we *also*
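The per-node scope is expected: lctl deactivate only touches the local device. To flip the OSC on every client at once, the permanent parameter is set on the MGS instead:
[code]
lctl conf_param ufhpc-OST0008.osc.active=0
[/code]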
2007 Nov 29
2
Balancing I/O Load
We are seeing some disturbing (probably due to our ignorance) behavior from Lustre 1.6.3 right now. We have 8 OSSs with 3 OSTs per OSS (24 physical LUNs). We just created a brand new Lustre file system across this configuration using the default mkfs.lustre formatting options. We have this file system mounted across 400 clients. At the moment, we have 63 IOzone threads running
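Two standard commands for spot-checking the imbalance (mount point and file path are placeholders):
[code]
lfs df -h /mnt/lustre              # per-OST fill levels
lfs getstripe /mnt/lustre/afile    # which OSTs a given file landed on
[/code]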
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G       6%    /mnt/lustre[MDT:0]
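From the client side, a hedged way to confirm which devices are inactive (the state column is from memory; deactivated devices show IN rather than UP):
[code]
lctl dl               # osc lines marked IN are the inactive ones
lfs df /mnt/lustre    # inactive OSTs drop out of or are flagged in the listing
[/code]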
2004 Jan 11
3
Lustre 1.0.2 packages available
Greetings-- Packages for Lustre 1.0.2 are now available in the usual place: http://www.clusterfs.com/download.html This bug-fix release resolves a number of issues, of which a few are user-visible:
- the default debug level is now a more reasonable production value
- zero-copy TCP is now enabled by default, if your hardware supports it
- you should encounter fewer allocation failures