Displaying 20 results from an estimated 9000 matches similar to: "IOZone: Number of outstanding requests.."

2011 Jan 08
1
how to graph iozone output using OpenOffice?
Hi all, can anyone please steer me in the right direction with this one? I've searched the net but couldn't find a clear answer: how do I actually generate graphs from iozone output using OpenOffice? Every website I've found simply mentions that iozone can output an xls file which can be opened in MS Excel to generate a 3D graph, but I can't see how it's actually done. Can anyone…
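For reference, a sketch of the usual workflow, with flags as documented in the iozone man page and an example output filename: -R emits an Excel-style report and -b writes it to a spreadsheet file, which OpenOffice Calc can open directly.

    $ iozone -Ra -g 1G -b iozone_results.xls   # -R: Excel report, -b: write spreadsheet file
    (in OpenOffice Calc: open iozone_results.xls, select one test's data block,
     then Insert > Chart and pick a 3D chart type)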
2008 Jul 03
2
iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone [1] on it results in an oops [2]. remove_suid is called, accessing offset 14 of a NULL pointer. Let me know if you'd like me to test any fix, do further debugging or get more information. Thanks, Daniel --- [1] # mkfs.btrfs /dev/sda4 # mount /dev/sda4 /mnt /mnt# iozone -a . --- [2] [ 899.118926] BUG: unable to…
2008 Jul 16
1
[Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
Dear all, I have a customer who would like to use the Sun Fire X4500 as the NFS server for their backend services, and would like to see the potential performance gain compared to their existing systems. However, the output of the I/O stress test with iozone shows mixed results: read performance sharply degrades (almost to 1/20, i.e. from 2,000,000 down to 100,000) when the…
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing while doing performance testing with an SSD-offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS-based WebDAV servers), and naturally I'm looking at implementing SSD-based ZIL devices. I have a test machine with the…
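For context, a dedicated log (slog) device is attached to a pool with zpool add; the pool and device names below are hypothetical:

    # zpool add tank log c1t5d0   # attach the SSD as a dedicated ZIL (slog) device
    # zpool status tank           # the device should appear under a 'logs' section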
2008 Feb 02
17
New binary release of GPL PV drivers for Windows
I've just uploaded a new binary release of the GPL PV drivers for Windows - release 0.6.3. Fixes in this version are: . Should now work on any combination of front and backend bit widths (32, 32p, 64). Previous crashes that I thought were due to Intel arch were actually due to this problem. . The vbd driver will now not enumerate 'boot' disks (eg those normally serviced…
2014 Dec 03
2
Problem with AIO random read
Hello list, I set up Iometer to test AIO with 100% random reads. If "Transfer Request Size" is greater than or equal to 256 kilobytes, the transfer starts out fine, but 3-5 seconds later the throughput drops to zero. Server OS: Ubuntu Server 14.04.1 LTS Samba: Version 4.1.6-Ubuntu Dialect: SMB 2.0 AIO settings: aio read size = 1 aio write size = 1 vfs objects =…
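For reference, a minimal smb.conf share of the kind being described; the truncated vfs objects value is unknown, so aio_pthread below is only a guess at a common choice:

    [share]
        aio read size = 1           # reads of >= 1 byte may be handled asynchronously
        aio write size = 1          # likewise for writes
        vfs objects = aio_pthread   # hypothetical; the original message is cut off here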
2008 Dec 14
1
Is that iozone result normal?
A 5-node server and a 1-node client are connected by gigabit Ethernet. #] iozone -r 32k -r 512k -s 8G

         KB   reclen   write   rewrite    read   reread
    8388608       32   10559      9792   62435    62260
    8388608      512   63012     63409   63409    63138

It seems the 32k write/rewrite performance is very…
2010 Dec 02
3
Performance testing tools for Windows guests
Hi all, could you please point me to performance testing tools for Windows guests, mainly to see what their performance is for local storage? thx! B.
2007 Nov 19
0
Solaris 8/07 Zfs Raidz NFS dies during iozone test on client host
Hi, well I have a freshly built system with ZFS raidz: Intel P4 2.4 GHz, 1GB RAM, Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X controller, (2) Intel dual-port 1Gbit NICs. I have (5) 300GB disks in a raidz1 with ZFS, and I've created a couple of FS on this: /export/downloads /export/music /export/musicraw. I've shared these out as well. First with ZFS 'zfs…
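The message cuts off mid-command; purely as a reference for the mechanism it seems to be describing, NFS sharing of a ZFS filesystem is usually enabled per dataset. The dataset name below is a guess based on the mountpoints, and the exact command being typed is unknown:

    # zfs set sharenfs=on export/downloads   # share this dataset over ZFS's built-in NFS export
    # zfs get sharenfs export/downloads      # confirm the property took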
2010 May 25
0
Magic parameter "-ec" of IOZone to increase the write performance of samba
Hi, I am measuring the performance of my newly bought NAS with IOZone. The NAS is an embedded Linux box with Samba installed (the CPU is an Intel Atom). IOZone reported write performance of over 1 GB/s while the file size was less than or equal to 1 GB. Since the NIC is 1 Gbps, the maximum speed should be about 125 MB/s at most, so the IOZone report is amazing. Later I found that if the…
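For anyone reproducing this: per the iozone man page, -e includes flush (fsync/fflush) time and -c includes close() time in the measurement, which defeats client-side caching; without them, writes that only land in the client's cache look impossibly fast. A sketch, with example sizes and path:

    $ iozone -e -c -i 0 -i 1 -r 64k -s 2g -f /mnt/nas/testfile
    # -e: include flush in timing; -c: include close() in timing
    # -i 0/-i 1: write/rewrite and read/reread tests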
2007 Apr 25
2
SFTP and outstanding requests
I've been looking at the SFTP code and the filexfer RFC (and ended up answering my prior questions). I was wondering if anyone had any thoughts as to what might happen if the maximum number of outstanding requests was increased. Currently it's set in sftp.c at: /* Number of concurrent outstanding requests */ size_t num_requests = 16;
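Worth noting: in current OpenSSH this value is a runtime option, so experimenting doesn't require editing sftp.c (host is a placeholder; see sftp(1)):

    $ sftp -R 64 -B 32768 user@host   # -R: outstanding requests, -B: buffer size per request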
2012 Apr 28
1
SMB2 write performace slower than SMB1 in 10Gb network
Hi folks: I've been testing SMB2 performance with Samba 3.6.4 these days, and I find a weird result: SMB2 write performance is slower than SMB1 on a 10Gb Ethernet network. Server ----------------------- Linux: Redhat Enterprise 6.1 x64 Kernel: 2.6.31 x86_64 Samba: 3.6.4 (almost the default configuration) Network: Chelsio T4 T420-SO-CR 10GbE network adapter RAID: Adaptec 51645 RAID…
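For reference, the protocol ceiling in Samba 3.6 is selected in smb.conf, which makes A/B benchmarking the two dialects on the same share straightforward:

    [global]
        max protocol = SMB2   # set to NT1 to retest the same share over SMB1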
2012 Oct 11
0
samba performance downgrade with glusterfs backend
Hi folks, we found that Samba performance degrades a lot with a glusterfs backend. Volume info as follows: Volume Name: vol1 Type: Distribute Status: Started Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: pana53:/data/ Options Reconfigured: auth.allow: 192.168.* features.quota: on nfs.disable: on. Using dd (bs=1MB) or iozone (block=1MB) to test write performance gives about 400MB/s. #dd…
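The dd command is truncated; a typical invocation for this kind of write test (paths and sizes are examples) forces data out of the page cache so the reported rate reflects real writes:

    $ dd if=/dev/zero of=/mnt/vol1/testfile bs=1M count=4096 conv=fdatasync
    # conv=fdatasync: fdatasync() before dd exits, so MB/s isn't inflated by caching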
2017 Oct 11
0
iozone results
I'm testing iozone inside a VM booted from a gluster volume. By looking at network traffic on the host (the one connected to the gluster storage) I can see that a simple iozone -w -c -e -i 0 -+n -C -r 64k -s 1g -t 1 -F /tmp/gluster.ioz will push about 1200 Mbit/s on a bonded dual-gigabit NIC (probably with a bad bonding mode configured). fio returns about 50000 kB/s, which is 400000 kbps.
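For an apples-to-apples comparison, a rough fio equivalent of that iozone run (sequential write, 64k blocks, 1 GiB file; the job name and path are examples):

    $ fio --name=seqwrite --rw=write --bs=64k --size=1g --ioengine=sync \
          --filename=/tmp/gluster.fio --end_fsync=1
    # --end_fsync=1 flushes on completion, roughly matching iozone's -e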
2016 Feb 17
2
Amount CPU's
Quick question. In my host I've got two processors, each with 6 cores, and each core has two threads. I use Iometer to do some testing of hard drive performance, and I get the impression that using more cores gives me better results in Iometer (whether it will improve the speed of my guest is another question...). For a Windows 2012 R2 server guest, can I just give the guest 24 cores? Just to make…
2012 Aug 12
1
tuned-adm fixed Windows VM disk write performance on CentOS 6
On a 32bit Windows 2008 Server guest VM on a CentOS 5 host, iometer reported a disk write speed of 37MB/s. The same VM on a CentOS 6 host reported 0.3MB/s, i.e. the VM was unusable. Write performance in a CentOS 6 VM was also much worse, but it was usable. (See http://lists.centos.org/pipermail/centos-virt/2012-August/002961.html) With iometer still running in the guest, I installed tuned on…
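The fix implied by the subject is usually applied like this on a CentOS 6 KVM host; the profile name is an assumption, since the message is cut off before naming it:

    # yum install tuned
    # tuned-adm profile virtual-host   # assumed profile; host-side tuning (I/O elevator, dirty-page knobs)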
2004 Jun 26
1
OCFS Performance on a Hitachi SAN
I've been reading this group for a while and I've noticed a variety of comments regarding running OCFS on top of path-management packages such as EMC's Powerpath, and it brought to mind a problem I've been having. I'm currently testing a six-node cluster connected to a Hitachi 9570V SAN storage array, using OCFS 1.0.12. I have six LUNs presented to the hosts using HDLM,
2012 Oct 01
3
Best way to measure performance of ZIL
Hi all, I currently have an OCZ Vertex 4 SSD as a ZIL device and am well aware of the exaggerated claims of sustained performance. I was thinking about getting a DRAM-based ZIL accelerator such as Christopher George's DDRdrive, one of the STEC products, etc. Of course the key question I'm trying to answer is: is the price premium worth it? --- What is the (average/min/max)…
2008 Feb 19
1
ZFS and small block random I/O
Hi, we're doing some benchmarking at a customer (using IOzone) and for some specific small-block random tests, performance of their X4500 is very poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). Specifically, the test is the IOzone multithreaded throughput test with an 8GB file size and 8KB record size, with the server physmem'd to 2GB. I noticed a couple of peculiar…
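For reproduction, the described run maps roughly onto iozone's throughput mode as below. The thread count is a guess (the message says multithreaded but not how many), and note that in -t mode the -s size applies per thread:

    $ iozone -t 8 -s 8g -r 8k -i 0 -i 2 -e
    # -t 8: 8 worker threads; -r 8k: 8KB records; -i 2: random read/write mix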