similar to: Read ahead / prefetching

Displaying 20 results from an estimated 8000 matches similar to: "Read ahead / prefetching"

2007 Mar 20
15
How to bypass failed OST without blocking?
Hi, I want my Lustre setup to behave as follows when an OST fails: if a file has stripe data on the failed OST, any operation on that file should return an I/O error without blocking, and at the same time I should still be able to create and read/write new files, or read/write files that have no stripe data on the failed OST, without blocking. What should I do? How do I configure this? Thanks! swin
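
One commonly suggested way to get this fail-fast behavior (a sketch, not from this message; the device number and the OST name lustre-OST0003 are purely illustrative) is to deactivate the client-side OSC that points at the failed OST, so I/O touching it returns an error instead of hanging:

    # find the device number of the OSC for the failed OST
    lctl dl | grep OST0003
    # deactivate it; operations on files striped there then fail with an I/O error
    lctl --device 11 deactivate
    # check which OSTs a given file actually has stripes on
    lfs getstripe /mnt/lustre/somefile
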
2006 Mar 17
1
[RFC] mke2fs with DIR_INDEX, RESIZE_INODE by default
I've been thinking recently that we should re-enable DIR_INDEX in mke2fs by default. When it first came out, we had done this and were bitten by a few bugs in the code. However, this code has been in heavy use for several thousand filesystem years in Lustre, if not elsewhere, and I'm inclined to think it is pretty safe these days. Likewise, RHEL/FC have had RESIZE_INODE as a standard
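
For reference, both features can already be requested explicitly without waiting for a change of defaults (a sketch; the device name is illustrative):

    # create a filesystem with directory indexing and the online-resize reserve enabled
    mke2fs -O dir_index,resize_inode /dev/sdb1
    # or turn on directory indexing for an existing filesystem
    tune2fs -O dir_index /dev/sdb1
    e2fsck -fD /dev/sdb1    # -D rebuilds/optimizes the directory indexes
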
2010 Aug 17
18
write RPC & congestion
Hi, thanks for the previous help. I have some questions about Lustre RPC and the sequence of events that occurs during a large concurrent write() involving many processes and a large data size per process. I understand there is a mechanism of flow control by credits, but I'm a little unclear on how it works in general after reading the "networking & io protocol" white paper. Is
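
On the client side, the write-credit and RPC concurrency limits being asked about can at least be inspected per OST (a sketch, assuming a 1.8-era or newer client; exact parameter names can vary between versions):

    # maximum number of RPCs the client keeps in flight to each OST
    lctl get_param osc.*.max_rpcs_in_flight
    # dirty cache allowed per OST before further writes block
    lctl get_param osc.*.max_dirty_mb
    # histogram of the RPC sizes actually issued
    lctl get_param osc.*.rpc_stats
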
2007 Nov 26
15
bad 1.6.3 striped write performance
Hi, I'm seeing what can only be described as dismal striped write performance from Lustre 1.6.3 clients :-/ 1.6.2 and 1.6.1 clients are fine. 1.6.4rc3 clients (from CVS a couple of days ago) are also terrible. The results below show that the OS (CentOS 4.5/5), the fabric (GigE/IB), and the Lustre version on the servers don't matter - the problem is with the 1.6.3 and 1.6.4rc3 client kernels
2010 Jul 05
4
Adding OST to online Lustre with quota
Hello, we wonder whether it is possible to add OSTs to a Lustre file system with quota support without taking it offline. We tried to do this, but all quota information was lost. Despite the fact that the OST was formatted with quota support, we are receiving this error message: Lustre: 3743:0:(lproc_quota.c:447:lprocfs_quota_wr_type()) lustrefs-OST0016: quotaon failed because quota files
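
On 1.8-era Lustre the usual follow-up after adding OSTs was to rebuild the quota files from a client, which regenerates rather than preserves the old usage data (a sketch; the mount point is illustrative):

    # re-scan usage and regenerate user/group quota files on all targets
    lfs quotacheck -ug /mnt/lustrefs
    # re-enable quota enforcement afterwards
    lfs quotaon -ug /mnt/lustrefs
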
2014 Mar 08
2
Re: questions regarding file-system optimization for software-RAID array
Andreas, why is it relevant only in the case of RAID5 or RAID6? regards, Martin On Fri, Mar 7, 2014 at 5:57 PM, Andreas Dilger <adilger@dilger.ca> wrote: > Note that stride and stripe width only make sense for RAID-5/6 arrays. > For RAID-1 it doesn't really matter. > > Cheers, Andreas >> On Mar 6, 2014, at 13:46, Martin T <m4rtntns@gmail.com> wrote: >>
2008 Feb 14
9
how do you mount mountconf (i.e. 1.6) lustre on your servers?
As any of you using version 1.6 of Lustre knows, Lustre servers can now be started simply by mounting the devices they use. Even an /etc/fstab entry can be used if you can have the mount delayed until the network is started. Given this change, you have also noticed that we have eliminated the initscript for Lustre that used to exist for releases prior to 1.6. I'd like to take a
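
For anyone who has not switched yet, the mountconf style of starting a server and the matching fstab entry look roughly like this (a sketch; device and mount point names are illustrative):

    # start an OST (or MDT) simply by mounting its backing device
    mount -t lustre /dev/sdb /mnt/lustre/ost0
    # equivalent /etc/fstab line; _netdev delays the mount until the network is up
    /dev/sdb   /mnt/lustre/ost0   lustre   _netdev   0 0
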
2010 Aug 11
3
Failure when mounting Lustre
Hi, I get the following error when I try to mount Lustre on the clients.

Permanent disk data:
Target:     lustre-OSTffff
Index:      unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x72 (OST needs_index first_time update )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=164.107.119.231@tcp

sh: losetup: command not found
mkfs.lustre: error 32512 on losetup:
2014 Mar 08
0
Re: questions regarding file-system optimization for software-RAID array
The stripe and stride options do two things:
- shift the block and inode bitmaps in each group to be on different disks
- align the block allocation to the stripe and stride boundaries to avoid read-modify-write in RAID

The first one is irrelevant if the flex_bg option is used, since it already packs the bitmaps together and achieves the same effect. The second is meaningless for RAID-1 since
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't find a bz for this, but it would appear that tunefs.lustre --print fails on a Lustre MDT or OST device if it is mounted with MMP. Is this expected behavior? TIA

mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1
checking for existing Lustre data: not found
tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
2010 Aug 12
3
How to track down a latency/timing problem
Hello Lustre Experts, I am trying to solve a problem with a very slow "ls" and other operations on large numbers of files, while overall read/write rates are good. We are running a small cluster of 3 OSSs with 9 OSTs, 1 MDS (with an SSD MDT) and currently two clients. All server nodes are CentOS 5.2 with Lustre 1.8.1, while the clients are CentOS 5.4 with Lustre 1.8.3. All components are networked with DDR
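
One client-side knob often looked at for slow "ls" workloads is statahead, which prefetches attributes while a directory is being listed (a sketch, assuming a 1.8.x client; parameter names may differ on other versions):

    # how many entries the client will stat ahead during a directory listing
    lctl get_param llite.*.statahead_max
    # hit/miss counters showing whether statahead is actually helping
    lctl get_param llite.*.statahead_stats
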
2007 Jun 05
1
Calculating stride values?
All, I have a question about calculating the value for the -E stride option to mke2fs. The mke2fs man page says: stride=stripe-size: Configure the filesystem for a RAID array with stripe-size filesystem blocks per stripe. So stride = size of stripe / blocksize. The size of a stripe is the RAID chunk size * the number of drives in the RAID. My question: are parity disks
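
As a worked example (not from this thread, and following the now-common convention that stride is the per-disk chunk size in filesystem blocks while stripe-width counts data disks only, excluding parity; the spelling stripe-width matches the command used later in this listing): for an 8-disk RAID-6 (6 data + 2 parity) with a 64 KiB chunk and 4 KiB blocks:

    # stride       = chunk size / block size = 64 KiB / 4 KiB = 16
    # stripe-width = stride * data disks     = 16 * 6         = 96
    mkfs.ext3 -b 4096 -E stride=16,stripe-width=96 /dev/md0
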
2008 Feb 22
6
2.6.23 client systems with any compatible server
I want to have a Lustre client running on a system with a 2.6.23.12 kernel. (The reason is that there is a special patch required for the 60+ Quad-Core AMD Opteron systems that we have, and the patch is currently only available for this 2.6.23.12 kernel.) Does anyone have a recommendation on how I should get a client and then a compatible server? For the server, we only need minimal
2014 Mar 06
2
questions regarding file-system optimization for software-RAID array
Hi, I created a RAID1 array of two physical HDDs with a chunk size of 64 KiB under Debian "wheezy" using mdadm. As a next step, I would like to create an ext3 (or ext4) file system on this RAID1 array using the mke2fs utility. According to RAID-related tutorials, I should create the file system like this:
# mkfs.ext3 -v -L myarray -m 0.5 -b 4096 -E stride=16,stripe-width=32 /dev/md0
2008 Mar 03
1
Quota setup fails because of OST ordering
Hi all, after installing a Lustre test file system consisting of 34 OSTs, I encountered a strange error when trying to set up quotas: lfs quotacheck gave me an "Input/Output error", while in /var/log/kern.log I found this Lustre error:

LustreError: 20807:0:(quota_check.c:227:lov_quota_check()) lov idx 32 inactive

Indeed, in /proc/fs/lustre/lov/.../target_obd all 34 OSTs were listed
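
For anyone hitting the same thing, the LOV's view of the OST indices can be checked directly (a sketch; the wildcard stands in for the elided lov directory name):

    # list every OST index, its UUID and its ACTIVE/INACTIVE state as the LOV sees it
    cat /proc/fs/lustre/lov/*/target_obd
    # show all configured devices and whether each one is up
    lctl dl
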
2014 Mar 07
0
Re: questions regarding file-system optimization for software-RAID array
Note that stride and stripe width only make sense for RAID-5/6 arrays. For RAID-1 it doesn't really matter. Cheers, Andreas > On Mar 6, 2014, at 13:46, Martin T <m4rtntns@gmail.com> wrote: > > Hi, > > I created a RAID1 array of two physical HDD's with chunk size of 64KiB under Debian "wheezy" using mdadm. As a next step, I would like to create an ext3(or
2013 Aug 25
2
Loop device performance
Hello, I have a production script that does read operations on a lot of small files. I read that one can gain a performance boost with small files by using a loop device on top of Lustre. So I created a 500 GB file striped across all of my OSTs (there are 8 of them). I formatted the file with an ext2 filesystem and mounted it on a client. Just for the sake of testing, a simple bash script finds all files with a given file
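
The setup being described is roughly the following (a sketch, not taken from the message; file names, sizes and mount points are illustrative):

    # create a file striped across all OSTs, fill it, and put a local fs on it
    lfs setstripe -c -1 /mnt/lustre/loopback.img
    dd if=/dev/zero of=/mnt/lustre/loopback.img bs=1M count=512000
    mkfs.ext2 -F /mnt/lustre/loopback.img
    # mount it through a loop device on the client doing the small-file reads
    mount -o loop /mnt/lustre/loopback.img /mnt/smallfiles
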
2008 Feb 07
2
Lustre behaviour when multiple network paths are available?
Hi there, When Lustre is configured in an environment where there are multiple paths to the same destination of the same length (i.e. two paths, each one hop away), which path(s) will be used for sending and receiving data? I have my cluster configured with two OSTs with two GigE NICs in each. I am seeing identical performance metrics when I use LACP to aggregate, and when I use two separate
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple Ethernet config, with the MDT and OST on the same node. Can someone tell me if the following (~150 second recovery occurring when a small 190 GB OST is re-mounted) is expected behavior, or if I'm missing something? I thought I would send this and continue with the eval while awaiting a response. I'm using
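
The recovery window in question can be watched directly on the server while the OST is re-mounted (the path comes from the subject line; only the target name would differ on another system):

    # shows recovery status, the number of clients to wait for, and time remaining
    cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
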
2012 Sep 27
4
Bad reporting inodes free
Hello, when I run "df -i" on my clients I get 95% inodes used, i.e. 5% inodes free:

Filesystem                            Inodes    IUsed   IFree IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248   95% /mnt/data

But if I run lfs df -i I get:
UUID Inodes IUsed IFree I