similar to: Large directory performance

Displaying 20 results from an estimated 2000 matches similar to: "Large directory performance"

2007 Nov 07
9
How To change server recovery timeout
Hi, our Lustre environment is 2.6.9-55.0.9.EL_lustre.1.6.3smp. I would like to change the recovery timeout from the default value of 250s to something longer. I tried the example from the manual: set_timeout <secs>, which sets the timeout (obd_timeout) for a server to wait before failing recovery. We performed that experiment on our test Lustre installation with one OST. storage02 is our OSS. [root at
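For reference, a minimal sketch of the two usual ways to raise obd_timeout on 1.6.x; "testfs" is a placeholder fsname and the exact proc path may differ by release:

    # Permanent change, run on the MGS node:
    lctl conf_param testfs.sys.timeout=600

    # Temporary change on one server, via the obd_timeout proc entry:
    echo 600 > /proc/sys/lustre/timeout
    cat /proc/sys/lustre/timeout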
2008 Jun 17
4
maximum MDT inode count
For future filesystem compatibility, we are wondering if there are any Lustre MDT filesystems in existence that have 2B or more total inodes? This is fairly unlikely, because it would require an MDT filesystem that is > 8TB in size (which isn't even supported yet) and/or has been formatted with specific options to increase the total number of inodes. This can be checked with
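For reference, a hedged sketch of how a total inode count can be checked; /dev/mdtdev is a placeholder for the MDT block device:

    # On the MDS, read the superblock of the MDT filesystem:
    dumpe2fs -h /dev/mdtdev | grep -i 'inode count'

    # Or from any client, per-target inode totals:
    lfs df -i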
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't find a bz for this, but it would appear that tunefs.lustre --print fails on a Lustre MDT or OST device if mounted with MMP. Is this expected behavior? TIA mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1 checking for existing Lustre data: not found tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
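A quick way to confirm that MMP is actually in play on the device (a sketch; output details vary with e2fsprogs versions):

    dumpe2fs -h /dev/mapper/mdt1 | grep -iE 'features|mmp'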
2007 Mar 20
15
How to bypass failed OST without blocking?
Hi, I want my Lustre to behave as follows when an OST fails: if a file has stripe data on the failed OST, any operation on that file should return an IO error without blocking, and at the same time I should still be able to create and read/write new files, or read/write files with no stripe data on the failed OST, without blocking. What should I do? How do I configure this? thanks! swin
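A commonly cited approach is to deactivate the OSC for the failed OST so I/O to it errors out instead of hanging; a hedged sketch, where N is whatever device number 'lctl dl' reports:

    # Find the OSC device for the failed OST:
    lctl dl | grep osc
    # Deactivate it (on the MDS so no new objects land there, and on clients
    # so access to affected files returns an error rather than blocking):
    lctl --device N deactivate
    # Re-activate once the OST is back:
    lctl --device N activate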
2008 Feb 22
6
2.6.23 client systems with any compatible server
I want to have a Lustre client running on a system with a 2.6.23.12 kernel. (The reason is that a special patch required for our 60+ Quad-Core AMD Opteron systems is currently only available for this 2.6.23.12 kernel.) Does anyone have a recommendation for how I should get a client and then a compatible server? For the server, we only need minimal
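A rough outline of building a patchless client against an existing kernel tree (a sketch only; whether a given 1.6.x release actually supports 2.6.23 is the open question here):

    tar xzf lustre-1.6.x.tar.gz && cd lustre-1.6.x
    ./configure --with-linux=/usr/src/linux-2.6.23.12 --disable-server
    make && make install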
2007 Nov 12
8
More failover issues
In 1.6.0, when creating an MDT, you could specify multiple --mgsnode options and it would fail over between them. 1.6.3 only seems to take the last one, and --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib doesn't seem to fail over to the other node. Any ideas how to get around this? Robert Robert LeBlanc College of Life Sciences Computer Support Brigham Young University leblanc at
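For comparison, the usual form for listing failover MGS NIDs is to repeat the option at format time; a sketch with placeholder fsname and device:

    mkfs.lustre --ost --fsname=testfs \
        --mgsnode=192.168.1.252@o2ib --mgsnode=192.168.1.253@o2ib /dev/sdb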
2007 Nov 19
6
Dedicated MGS?
This may be in the documentation. If so, I missed it. If a site has multiple Lustre file systems, the documentation implies that there only needs to be a single MGS for an entire site (regardless of the number of file systems). However, I also know it is fairly common to have a combined MGS/MDT. So here are the questions. 1. If we are going to have several Lustre file systems,
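A sketch of the standalone-MGS layout being asked about, with placeholder devices and NIDs: one small MGS target, and each filesystem's MDT formatted to point at it:

    mkfs.lustre --mgs /dev/sda
    mkfs.lustre --mdt --fsname=fs1 --mgsnode=10.0.0.10@tcp0 /dev/sdb
    mkfs.lustre --mdt --fsname=fs2 --mgsnode=10.0.0.10@tcp0 /dev/sdc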
2010 Jul 08
5
No space left on device on a filesystem that is not full
Hello, we are running Lustre 1.8.1 and have hit a "No space left on device" error when uploading 500 GB of small files (less than 100 KB each). The problem seems to depend on the number of files. If we remove one file, we can create one new file, even one of GB size; but if we haven't removed something, we can't create even a very small file, for example using touch
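With many small files the usual first check is inode/object exhaustion rather than bytes; a minimal sketch from any client:

    lfs df -h     # block usage per MDT/OST
    lfs df -i     # inode usage per MDT/OST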
2013 Apr 16
2
UID/GID access control in Lustre
Hello list members, I started to develop a kernel module which hooks into Lustre 2.3 to control data access based on NID and UID/GID. The background is the following: here at GSI we currently have a reserved UID/GID range which partner institutes use to access our exported Lustre mounts. However, we currently have no mechanism to control (guarantee) that the reserved UID/GID range is
2008 Feb 04
32
Luster clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster uses all GigE and has about 608 nodes / 1854 cores. We have a lot of jobs that die and/or go into high IO wait; strace shows processes stuck in fstat(). The big problem is (I think, and I would like some feedback on it) that of these 608 nodes, 209 of them have in dmesg
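A quick way to survey which nodes are logging evictions (assumes pdsh or a similar parallel-ssh tool is set up for the cluster):

    pdsh -a "dmesg | grep -ci evict" 2>/dev/null | sort -t: -k2 -n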
2010 Jul 01
6
best practice for lustre cluster startup
Hello, I have recently installed a Lustre cluster which is in a test phase now but will potentially be in 24x7 production if it is accepted. I would like input from the list on the recommendations/best practices for configuring Lustre cluster startup. Is it advisable to have Lustre on the various server pieces (MGS/MDT/OSSs) start automatically? If not, why not?
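One common pattern, sketched here with placeholder devices: keep targets out of the automatic boot path (noauto) and mount them explicitly, or via an HA agent, in MGS, then MDT, then OST order, so a rebooted server never races the rest of the cluster:

    # /etc/fstab on a server
    /dev/mapper/mgs   /mnt/mgs   lustre  noauto,_netdev  0 0
    /dev/mapper/mdt0  /mnt/mdt0  lustre  noauto,_netdev  0 0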
2012 Oct 09
1
MDS read-only
Dear all, two of our MDSes have repeatedly gone read-only recently, after an e2fsck on Lustre 1.8.5. After the MDT has been mounted for a while, the kernel reports errors like: Oct 8 20:16:44 mainmds kernel: LDISKFS-fs error (device cciss!c0d1): ldiskfs_ext_check_inode: bad header/extent in inode #50736178: invalid magic - magic 0, entries 0, max 0(0), depth 0(0) Oct 8 20:16:44 mainmds
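The usual first response, hedged and assuming a good backup exists: unmount the MDT and run a full e2fsck with the Lustre-patched e2fsprogs, read-only first:

    umount /mnt/mdt
    e2fsck -fn /dev/cciss/c0d1    # dry run, report only
    e2fsck -fp /dev/cciss/c0d1    # then the actual repair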
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton: http://www.linux-mag.com/id/7839 Does anyone have views on whether this sort of caching would be useful for the MDT? My feeling is that MDT reads are probably pretty random, but writes might benefit...? GREG -- Greg Matthews 01235 778658 Senior Computer Systems Administrator Diamond Light Source, Oxfordshire, UK
2010 Jul 20
1
mdt backup tar --xattrs question
Greetings Group! I hope this will be an easy one. To conserve steps in backing up the metadata extended attributes of a Lustre MDT, I am looking at using a newer version of tar combined with its --xattrs option. (Note: Previously I have used the MDT two-step back-up from the Lustre Manual and it has been successful.) If I can backup the extended attributes via tar so that I don't
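A sketch of both approaches, assuming a GNU tar new enough to support --xattrs and that the MDT is mounted as ldiskfs at a placeholder path:

    cd /mnt/mdt-ldiskfs
    # One-step idea with newer tar (Lustre EAs live in the trusted.* namespace):
    tar czf /backup/mdt.tgz --xattrs --xattrs-include='trusted.*' --sparse .

    # Traditional two-step from the manual, for comparison:
    getfattr -R -d -m '.*' -e hex -P . > /backup/ea.bak
    tar czf /backup/mdt.tgz --sparse .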
2008 Jan 15
19
How do you make an MGS/OSS listen on 2 NICs?
I am running on a CentOS 5 distribution without adding any updates from CentOS. I am using the Lustre 1.6.4.1 kernel and software. I have two NICs that run through different switches. I have the lustre options in my modprobe.conf looking like this: options lnet networks=tcp0(eth1,eth0) However, my MGS seems to be listening only on the first interface. When I try to ping the 1st interface (eth1)
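A sketch of the more common layout, one LNET network per NIC/switch, with placeholder addresses; clients on the second subnet would then use the tcp1 NID:

    # modprobe.conf on the server
    options lnet networks=tcp0(eth0),tcp1(eth1)

    # example client mount over the second network
    mount -t lustre 192.168.2.10@tcp1:/testfs /mnt/testfs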
2012 Sep 27
4
Bad reporting inodes free
Hello, when I run "df -i" on my clients I get 95% inodes used, i.e. 5% inodes free:
Filesystem                           Inodes    IUsed   IFree IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248  95% /mnt/data
But if I run lfs df -i I get:
UUID Inodes IUsed IFree I
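The per-target view usually explains the gap, since the client-side df -i figure is derived from the MDT and OST counts; from any client:

    lfs df -i /mnt/data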
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used     Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M   3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID
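A sketch of the usual follow-up, where N is the device number shown by lctl dl:

    lctl dl                   # find the inactive OSC/OST device
    lctl --device N activate  # try re-activating it
    lfs df -h                 # the OST should reappear if the OSS is reachable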
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used     Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M   3.9G       6%    /mnt/lustre[MDT:0]
2008 Mar 07
2
Multihomed question: want Lustre over IB and Ethernet
Chris, perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not. Shane ----- Original Message ----- From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org> To: lustre-discuss <lustre-discuss at lists.lustre.org> Sent: Fri Mar 07 12:03:17 2008 Subject: Re: [Lustre-discuss] Multihomed
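For reference, a hedged sketch of the writeconf procedure (placeholder devices; all clients stopped and all targets unmounted first):

    tunefs.lustre --writeconf /dev/mdtdev   # on the MDS first
    tunefs.lustre --writeconf /dev/ostdev   # then on each OST
    # remount MGS/MDT, then OSTs, then clients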