similar to: how to graph iozone output using OpenOffice?

Displaying 20 results from an estimated 1000 matches similar to: "how to graph iozone output using OpenOffice?"

2007 Nov 19
0
Solaris 8/07 Zfs Raidz NFS dies during iozone test on client host
Hi, Well I have a freshly built system with ZFS raidz. Intel P4 2.4 GHz, 1 GB RAM, Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller, (2) Intel dual-port 1 Gbit NICs. I have (5) 300GB disks in a raidz1 with ZFS. I've created a couple of FS on this: /export/downloads /export/music /export/musicraw. I've shared these out as well. First with ZFS 'zfs
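For readers following along, a minimal sketch of the kind of setup described above; the pool name and device names are assumptions, not taken from the post:

    # create a single-parity raidz pool from five disks (names hypothetical)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    # create filesystems and share them over NFS
    zfs create -o mountpoint=/export/downloads tank/downloads
    zfs create -o mountpoint=/export/music tank/music
    zfs set sharenfs=on tank/downloads
    zfs set sharenfs=on tank/music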
2010 May 25
0
Magic parameter "-ec" of IOZone to increase the write performance of samba
Hi, I am measuring the performance of my newly bought NAS with IOZone. The NAS runs an embedded Linux with Samba installed (the CPU is an Intel Atom). IOZone reported write performance of over 1 GB/s while the file size was less than or equal to 1 GB. Since the NIC is 1 Gbps, the maximum speed should be about 125 MiB/s at most. The IOZone test report is astonishing. Later I found that if the
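A sketch of how the -e and -c flags change what IOZone measures (the mount point and sizes below are assumptions): -e includes fsync()/fflush() in the timing and -c includes close(), so the reported write rate reflects data actually flushed to the NAS rather than data sitting in the client page cache.

    # without -e/-c: writes may be timed while still in the client cache
    iozone -i 0 -r 64k -s 2g -f /mnt/nas/test.ioz
    # with -e and -c: flush and close are included in the timing
    iozone -e -c -i 0 -r 64k -s 2g -f /mnt/nas/test.ioz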
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello: Sorry for asking an iozone question on this mailing list, but I couldn't find any mailing list for iozone... In IOZone, is there a way to configure the number of outstanding requests the client sends to the server side? Something along the lines of the IOMeter option "Number of outstanding requests". Thanks a lot!
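For what it's worth, IOZone has no direct queue-depth knob equivalent to IOMeter's "number of outstanding requests"; the closest approximation is throughput mode, where -t sets the number of concurrent worker processes. A sketch with assumed paths and sizes:

    # 4 workers, one file each; a rough analogue of 4 outstanding requests
    iozone -t 4 -r 64k -s 512m -i 0 -i 1 \
        -F /mnt/test/f1 /mnt/test/f2 /mnt/test/f3 /mnt/test/f4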
2017 Oct 11
0
iozone results
I'm testing iozone inside a VM booted from a gluster volume. By looking at network traffic on the host (the one connected to the gluster storage) I can see that a simple iozone -w -c -e -i 0 -+n -C -r 64k -s 1g -t 1 -F /tmp/gluster.ioz pushes about 1200 Mbit/s on a bonded dual-gigabit NIC (probably with a bad bonding mode configured). fio reports about 50000 kB/s, which is 400000 kbit/s.
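To make the comparison apples-to-apples, here is a rough fio job mirroring the iozone run above; the option choices (psync engine, end_fsync to match iozone's -e) are assumptions about what the poster ran, not taken from the thread:

    fio --name=seqwrite --rw=write --bs=64k --size=1g \
        --ioengine=psync --end_fsync=1 --filename=/tmp/gluster.fio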
2008 Jul 16
1
[Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
Dear ALL, IHAC who would like to use Sun Fire X4500 as the NFS server for their backend services, and would like to see the potential performance gain compared to their existing systems. However, the output of the I/O stress test with iozone shows mixed results as follows: * The read performance sharply degrades (almost down to 1/20, i.e. from 2,000,000 down to 100,000) when the
2008 Jul 03
2
iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone [1] on it results in an oops [2]. remove_suid is called, accessing offset 14 of a NULL pointer. Let me know if you'd like me to test any fix, do further debugging or get more information. Thanks, Daniel --- [1] # mkfs.btrfs /dev/sda4 # mount /dev/sda4 /mnt /mnt# iozone -a . --- [2] [ 899.118926] BUG: unable to
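For readability, the reproduction steps from [1], one command per line (the working directory for the iozone run comes from the "/mnt#" prompt in the snippet):

    mkfs.btrfs /dev/sda4
    mount /dev/sda4 /mnt
    cd /mnt && iozone -a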
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing while trying to do performance testing with an SSD-offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS-based WebDAV servers), and naturally I'm looking at implementing SSD-based ZIL devices. I have a test machine with the
2008 Dec 14
1
Is that iozone result normal?
A 5-node server and a 1-node client are connected by gigabit Ethernet.
#] iozone -r 32k -r 512k -s 8G

                                                  random  random    bkwd  record  stride
      KB  reclen   write rewrite    read   reread    read   write    read rewrite    read  fwrite frewrite   fread  freread
 8388608      32   10559    9792   62435   62260
 8388608     512   63012   63409   63409   63138

It seems 32k write/rewrite performance are very
2008 Dec 19
0
Friday Dec 19th at Noon ET: Jazinga pbx appliance
Hi all, Get your questions ready as tomorrow's VUC call will feature Shidan Gouran, CTO of Jazinga, makers of a new Asterisk appliance. Jazinga have developed a web 2.0 GUI for their embedded Asterisk appliance. We all love GUIs, right? They want to make it easy for a non-techie to set up a small-office Asterisk solution. For details about the Jazinga product you can see Michael Graves'
2010 Aug 06
0
Re: PATCH 3/6 - direct-io: do not merge logically non-contiguous requests
On Fri, May 21, 2010 at 15:37:45 -0400, Josef Bacik wrote: > On Fri, May 21, 2010 at 11:21:11AM -0400, Christoph Hellwig wrote: >> On Wed, May 19, 2010 at 04:24:51PM -0400, Josef Bacik wrote: >> > Btrfs cannot handle having logically non-contiguous requests submitted. For >> > example if you have >> > >> > Logical: [0-4095][HOLE][8192-12287]
2004 Jul 06
0
A iozone test results for svn 1226 ocfs2 code on IPF platfrom
Skipped content of type multipart/alternative
-------------- next part --------------
A non-text attachment was scrubbed...
Name: report_ia64.xls
Type: application/vnd.ms-excel
Size: 62464 bytes
Desc: report_ia64.xls
Url: http://oss.oracle.com/pipermail/ocfs2-devel/attachments/20040702/68304798/report_ia64-0001.xls
2011 Dec 08
0
folder with no permissions
Hi Matt, Can you please provide us with more information?
1. What version of glusterfs are you using?
2. Was iozone run as root or as a user?
   a. If a user, did it have the required permissions?
3. Steps to reproduce the problem
4. Any other errors related to stripe in the client log?
With regards, Shishir ________________________________________ From: gluster-users-bounces at gluster.org
2010 Jul 06
0
[PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi Jeff, On 07/03/2010 03:58 AM, Jeff Moyer wrote: > Hi, > > Running iozone or fs_mark with fsync enabled, the performance of CFQ is > far worse than that of deadline for enterprise class storage when dealing > with file sizes of 8MB or less. I used the following command line as a > representative test case: > > fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s
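The quoted fs_mark command is cut off at -s (file size) in the snippet and the missing value is not recoverable; a representative invocation matching the "8MB or less" description might look like the following, with the -s and -t values purely illustrative:

    # -s is the file size in bytes (64 KB files here, value assumed)
    fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s 65536 -t 1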
2017 Sep 11
0
3.10.5 vs 3.12.0 huge performance loss
Here are my results: Summary: I am not able to reproduce the problem; IOW I get relatively equivalent numbers for sequential IO when going against 3.10.5 or 3.12.0. Next steps:
- Could you pass along your volfiles (both the client vol file, from /var/lib/glusterd/vols/<yourvolname>/patchy.tcp-fuse.vol, and a brick vol file from the same place)?
- I want to check
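A sketch of how one might collect the requested files, assuming a volume literally named "patchy" as in the path above (substitute your own volume name):

    # client (fuse) vol file and brick vol files live in the same directory
    ls /var/lib/glusterd/vols/patchy/*.vol
    cat /var/lib/glusterd/vols/patchy/patchy.tcp-fuse.vol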
2008 Feb 01
2
Un/Expected ZFS performance?
I'm running PostgreSQL (v8.1.10) on Solaris 10 (SPARC) from within a non-global zone. I originally had the database "storage" in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of "X" (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings Gentlemen, I'm currently testing a new setup for a ZFS-based storage system with dedup enabled. The system is set up on OI 148, which seems quite stable with dedup enabled (compared to the OpenSolaris snv_136 build I used before). One issue I ran into, however, is quite baffling: with iozone set to 32 threads, ZFS's ARC seems to consume all available memory, making
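One common mitigation (not proposed in the thread itself) is to cap the ARC so it cannot consume all RAM; on Solaris-derived systems such as OI this is usually done via /etc/system. The 4 GB value below is only an example:

    * /etc/system -- cap the ZFS ARC at 4 GB (4294967296 bytes); reboot to apply
    set zfs:zfs_arc_max = 4294967296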
2008 Feb 19
1
ZFS and small block random I/O
Hi, We're doing some benchmarking at a customer site (using IOzone), and for some specific small-block random tests, the performance of their X4500 is very poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). Specifically, the test is the IOzone multithreaded throughput test with an 8GB file size and an 8KB record size, with the server physmem'd to 2GB. I noticed a couple of peculiar
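A guess at the described IOzone test; the thread count and file layout are assumptions, since the post does not spell them out. -O reports results in ops/sec, which is often more readable for small-block random I/O:

    # 2 workers, 8 KB records, random read/write (-i 2 requires -i 0 first)
    iozone -O -i 0 -i 2 -r 8k -s 4g -t 2 -F /pool/f1 /pool/f2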
2015 Apr 14
0
Re: VM Performance using KVM Vs. VMware ESXi
Dear Jatin, Maybe it's a good idea to implement Spice first: <video> <model type='qxl' ram='65536' vram='65536' heads='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video>
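If it helps, the usual libvirt workflow for applying that stanza (not spelled out in the message) is:

    virsh edit <domain>        # add the <video>/<address> block shown above
    virsh shutdown <domain> && virsh start <domain>   # full restart to apply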
2006 Feb 09
0
strange behaviour of domU - i/o performance tests
Hi, I am currently running some I/O performance tests within domUs using iozone3. One scenario is a domU with file-backed VBDs (root and swap) as sda1 and sda2, lying on an nfs-kernel-server (2.6.14.4 - Debian Sarge). The exact iozone command is: iozone -a -R -b result.xls -f /tmp/iozone.test -n 1m -g 256m -i 0 -i 1. As soon as the iozone test reaches a file size of 132M, the complete! system
2017 Oct 27
0
Poor gluster performance on large files.
Why don't you set the LSI to passthrough mode and use one brick per HDD? Regards, Bartosz > Message written by Brandon Bates <brandon at brandonbates.com> on 27.10.2017 at 08:47: > > Hi gluster users, > I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for
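A sketch of the suggested layout: one brick per HDD, with the controller in passthrough (JBOD) mode. Hostnames, mount points, and the replica count are assumptions:

    # each /bricks/sdX is a separately mounted XFS filesystem on its own disk
    gluster volume create gv0 replica 2 \
        server1:/bricks/sdb/brick server2:/bricks/sdb/brick \
        server1:/bricks/sdc/brick server2:/bricks/sdc/brick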