search for: reclen

Displaying 9 results from an estimated 9 matches for "reclen".

2008 Mar 26
0
different read i/o performance for equal guests
...8 768 2 -b---- 64660.8
DomU-4 3 768 2 -b---- 120196.4

To test the I/O I'm using the tool iozone. I've run "iozone -i 0 -i 1 -o -s 40m" in Dom0 and the results are:

   KB  reclen  write  rewrite     read   reread
40960       4   5849    20200  1857762  1877786

Then in DomU-2:

   KB  reclen  write  rewrite     read   reread
40960       4   4159    16464  1744170  1821813

Besides the lost performance, the results seem acceptable. Now in DomU-1:...
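As a quick sanity check on the figures quoted in that snippet (an illustrative one-liner, not part of the original post): the Dom0 write rate of 5849 KB/s versus the DomU-2 rate of 4159 KB/s works out to roughly 71%.

```shell
# Compare the quoted iozone write rates (KB/s): Dom0 vs DomU-2.
awk 'BEGIN { dom0 = 5849; domu = 4159; printf "DomU-2 write is %.0f%% of Dom0\n", 100 * domu / dom0 }'
```

This prints "DomU-2 write is 71% of Dom0", i.e. about a 29% write-throughput loss inside the guest.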
2009 Apr 15
5
StorageTek 2540 performance radically changed
...Tek 2540 to the latest recommended version and am seeing radically different performance when testing with iozone than I did in February of 2008. I am using Solaris 10 U5 with all the latest patches. This is the performance achieved (on a 32GB file) in February last year:

      KB  reclen   write  rewrite    read  reread
33554432      64  279863   167138  458807  449817
33554432     128  265099   250903  455623  460668
33554432     256  265616   259599  451944  448061
33554432     512  278530   294589  522930  471253

This is the new performa...
2008 Dec 14
1
Is that iozone result normal?
A 5-node server and a 1-node client are connected by gigabit Ethernet.

#] iozone -r 32k -r 512k -s 8G

     KB  reclen  write  rewrite   read  reread
8388608      32  10559     9792  62435   62260
8388608     512  63012    63409  63409   63138

It seems the 32k write/rewrite performance is very poor, which is different f...
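To put the KB/s figures quoted in that result on a more readable scale, here is a small awk sketch (the sample lines are the two data rows pasted from the snippet; column meanings follow iozone's header: KB, reclen, write, rewrite, read, reread):

```shell
# Convert the iozone write ($3) and read ($5) columns from KB/s to MB/s.
printf '8388608 32 10559 9792 62435 62260\n8388608 512 63012 63409 63409 63138\n' |
awk '{ printf "reclen %3s: write %.1f MB/s, read %.1f MB/s\n", $2, $3 / 1024, $5 / 1024 }'
```

This makes the asymmetry obvious: roughly 10 MB/s writes at a 32k record size versus roughly 61 MB/s at 512k, while reads sit near 61 MB/s in both cases.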
2008 Feb 01
2
Un/Expected ZFS performance?
...on the UFS system. I ran some low-level I/O tests (from http://iozone.org/) on my setup and have listed a sampling below for an 8k file and 8k record size: [Hopefully the table formatting survives]

UFS filesystem [on local disk]
======================
Run  KB  reclen   write  rewrite    read  reread
--------------------------------------------------------------------
  1   8       8   40632   156938  199960  222501   [./iozone -i 0 -i 1 -r 8 -s 8 -> no fsync include]
  2   8       8...
2001 Aug 20
1
[tytso@mit.edu: Re: Your ext2 optimisation for readdir+stat]
...======================
RCS file: fs/ext2/RCS/dir.c,v
retrieving revision 1.1
diff -u -r1.1 fs/ext2/dir.c
--- fs/ext2/dir.c	2001/08/18 11:11:30	1.1
+++ fs/ext2/dir.c	2001/08/18 12:41:10
@@ -303,7 +303,7 @@
 	const char *name = dentry->d_name.name;
 	int namelen = dentry->d_name.len;
 	unsigned reclen = EXT2_DIR_REC_LEN(namelen);
-	unsigned long n;
+	unsigned long start, n;
 	unsigned long npages = dir_pages(dir);
 	struct page *page = NULL;
 	ext2_dirent * de;
@@ -311,7 +311,11 @@
 	/* OFFSET_CACHE */
 	*res_page = NULL;
-	for (n = 0; n < npages; n++) {
+	start = dir->u.ext2_i.i_dir_sta...
2013 Feb 27
1
Slow read performance
Help please - I am running 3.3.1 on CentOS using a 10Gb network. I get reasonable write speeds, although I think they could be faster, but my read speeds are REALLY slow.

Executive summary:

On the gluster client:
  Writes average about 700-800 MB/s
  Reads average about 70-80 MB/s

On the server:
  Writes average about 1-1.5 GB/s
  Reads average about 2-3 GB/s

Any thoughts? Here are some additional details:
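Rough arithmetic on the ranges quoted in that post (midpoints assumed for illustration, not figures from the original): client reads run at about a tenth of client write throughput.

```shell
# Midpoints of the quoted client ranges: writes ~750 MB/s, reads ~75 MB/s.
awk 'BEGIN { printf "client reads are %.0f%% of client writes\n", 100 * 75 / 750 }'
```

This prints "client reads are 10% of client writes", which is the inversion one would normally expect on a replicated volume, where reads can be served from one brick but writes fan out to all replicas.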
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list, as this matter pops up every now and then in posts on this list, I just want to clarify that the real performance of RaidZ (in its current implementation) does NOT follow from its raidz-style data-efficient redundancy or from the copy-on-write design used in ZFS. In an M-way mirrored setup of N disks you get the write performance of the worst disk and a read performance that is
2009 Jun 15
33
compression at zfs filesystem creation
Hi, I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump? Thanks, ~~sa
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4-core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with twelve 300GB 15K RPM SAS drives connected via load-shared 4Gbit FC links. This week I have tried many different configurations: firmware-managed RAID, ZFS-managed RAID, and the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.