search for: iometer

Displaying 20 results from an estimated 48 matches for "iometer".

2012 Apr 28
1
SMB2 write performance slower than SMB1 in 10Gb network
...guration) Network: Chelsio T4 T420-SO-CR 10GbE network adapter RAID: Adaptec 51645 RAID Controller (Writeback RAID0 with 16 * 1TB SATA II disks) Filesystem: xfs (barrier off) Client ----------------------- Windows 2008 Server R2 64bit Network: Chelsio T4 T420-SO-CR 10GbE network adapter Test tool: Iometer Iometer configuration: Normal I/O test policy, 1MB sequential read/write Each Iometer test runs for 3 minutes; the Iometer test file size is 180GB. Server and client are connected directly with fabric links, without any 10GbE switches. I use Iometer to test normal file read/write performance, at fir...
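A workload like the one described could be approximated with fio as a cross-check (a sketch only; fio is not the tool the poster used, and the mount point and file name are hypothetical):

  fio --name=seqwrite --filename=/mnt/smbshare/testfile \
      --rw=write --bs=1M --size=180g \
      --runtime=180 --time_based --ioengine=libaio --direct=1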
2016 Feb 17
2
Amount CPU's
Quick question. In my host, I've got two processors, each with 6 cores, and each core has two threads. I use iometer to do some testing on hard drive performance. I get the idea that using more cores gives me better results in iometer. (whether it will improve the speed of my guest is another question...) For a Windows 2012 R2 server guest, can I just give the guest 24 cores? Just to make sure the OS has all the...
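If the goal is simply to mirror the host topology in the guest, a libvirt domain XML sketch along these lines would expose 2 sockets x 6 cores x 2 threads = 24 vCPUs (an assumption that KVM/libvirt is the stack in use; the excerpt does not say):

  <vcpu placement='static'>24</vcpu>
  <cpu mode='host-passthrough'>
    <topology sockets='2' cores='6' threads='2'/>
  </cpu>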
2012 Aug 12
1
tuned-adm fixed Windows VM disk write performance on CentOS 6
On a 32bit Windows 2008 Server guest VM on a CentOS 5 host, iometer reported a disk write speed of 37MB/s. The same VM on a CentOS 6 host reported 0.3MB/s, i.e. the VM was unusable. Write performance in a CentOS 6 VM was also much worse, but it was usable. (See http://lists.centos.org/pipermail/centos-virt/2012-August/002961.html) With iometer still running in...
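The fix described amounts to selecting a suitable tuned profile on the host; the general mechanics look like this (which profile actually resolved it is in the linked thread, not here; virtual-host is one plausible choice for a KVM host):

  tuned-adm list                   # show available profiles
  tuned-adm profile virtual-host
  tuned-adm active                 # confirm the active profile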
2010 Jul 25
1
VMGuest IOMeter numbers
Hello, first time posting. I've been working with zfs on and off with limited *nix experience for a year or so now, and have read a lot of things by a lot of you I'm sure. Still tons I don't understand/know, I'm sure. We've been having awful IO latencies on our 7210 running about 40 VMs spread over 3 hosts, no SSDs / intent logs.
2012 Oct 01
3
Best way to measure performance of ZIL
...used as an NFS share to ESXi hosts, which forces sync writes only (i.e. will it be noticeable in an end-to-end context)? I've been looking around and haven't found a succinct way of measuring the latency of an individual device when used as a ZIL in a zpool. I am experienced in using iometer to measure individual devices, but as always it isn't easy to decompose that benchmark to determine where the bottlenecks occur. It is possible to run multiple tests with multiple hardware configurations and compare iometer results, but I'm trying to avoid having to buy the ZIL a...
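For the device-level part of the question, a rough starting point (pool and device names hypothetical) is to attach the candidate device as a log vdev and watch per-vdev statistics while the sync-write workload runs:

  zpool add tank log c4t1d0      # attach the candidate SLOG device
  zpool iostat -v tank 1         # per-vdev ops/throughput, 1s interval

The zilstat DTrace script is also often suggested for observing ZIL traffic directly, though it reports activity rather than per-device latency.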
2008 Feb 02
17
New binary release of GPL PV drivers for Windows
I've just uploaded a new binary release of the GPL PV drivers for Windows - release 0.6.3. Fixes in this version are: . Should now work on any combination of front and backend bit widths (32, 32p, 64). Previous crashes that I thought were due to Intel arch were actually due to this problem. . The vbd driver will now not enumerate 'boot' disks (e.g. those normally serviced
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
...sh: > "virsh migrate --live --copy-storage-all --verbose vm1 > qemu+ssh://192.168.1.91/system tcp://192.168.1.91". I have checked the > libvirt debug output, and make sure that the drive-mirror + NBD > migration method is used. > > Inside the VM, I use an I/O benchmark (Iometer) to generate an OLTP > workload. I record the I/O performance (IOPS) before/during/after > migration. When the migration begins, the IOPS dropped by 30%-40%. > This is reasonable, because the migration I/O competes with the > workload I/O. However, during almost the last period of migra...
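One mitigation sometimes suggested for this is capping the migration bandwidth so the storage copy competes less with guest I/O; a sketch (the value is illustrative, not from the thread):

  virsh migrate-setspeed vm1 100     # limit migration to 100 MiB/s
  virsh migrate-getspeed vm1         # verify the current cap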
2014 Dec 03
2
Problem with AIO random read
Hello list, I set up Iometer to test AIO with 100% random read. If "Transfer Request Size" is greater than or equal to 256 kilobytes, throughput is good at the beginning, but 3~5 seconds later it drops to zero. Server OS: Ubuntu Server 14.04.1 LTS Samba: Version 4.1.6-Ubuntu Dialect: SMB 2.0 AIO...
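If the suspicion is Samba's AIO handling, the relevant knobs live in smb.conf; a sketch (share name and thresholds illustrative, not taken from the thread):

  [share]
      aio read size = 1      # use AIO for reads larger than 1 byte
      aio write size = 1     # likewise for writes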
2010 Dec 02
3
Performance testing tools for Windows guests
Hi all, could you please point me to performance testing tools for Windows guests, mainly to see what their performance is for local storage. thx! B.
2010 Jul 05
21
Aoe or iScsi???
Hi people... Here we use Xen 4 with Debian Lenny... We're using kernel 2.6.31.13 pvops... As a storage system, we use AoE devices... So, we installed VMs on an AoE partition... The "NAS" server is an Intel-based bare-metal box with SATA hard discs... However, sometimes I feel that the VMs are slow... Also, all VMs have GPLPV drivers installed... So, I am thinking about
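For reference, the AoE side of such a setup is typically just vblade on the target and aoetools on the initiator; a sketch with hypothetical device and interface names:

  # target: export /dev/sdb as shelf 0, slot 1 on eth0
  vblade 0 1 eth0 /dev/sdb

  # initiator
  aoe-discover
  aoe-stat        # devices appear as e.g. /dev/etherd/e0.1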
2016 Feb 18
0
Re: Amount CPU's
On Wed, Feb 17, 2016 at 07:14:33PM +0000, Dominique Ramaekers wrote: >Quick question. > >In my host, I've got two processors, each with 6 cores, and each core has >two threads. > >I use iometer to do some testing on hard drive performance. > >I get the idea that using more cores gives me better results in >iometer. (whether it will improve the speed of my guest is another >question...) > >For a Windows 2012 R2 server guest, can I just give the guest 24 cores? >Just to mak...
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello: Sorry for asking iozone questions in this mailing list, but I couldn't find any mailing list on iozone... In IOZone, is there a way to configure the # of outstanding requests the client sends to the server side? Something along the lines of the IOMeter option "Number of outstanding requests". Thanks a lot!
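The closest analogue I know of in iozone is its POSIX async I/O depth flag rather than a literal outstanding-request count; a sketch (sizes illustrative):

  iozone -i 0 -i 1 -r 256k -s 1g -H 16 -f /tmp/iozone.tmp
  # -H 16 = POSIX async I/O with 16 operations in flight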
2012 Mar 09
2
btrfs_search_slot BUG...
..., we quickly see btrfs_search_slot returning positive numbers, popping an assertion [2]. Are >4KB block sizes known to be broken for now? Thanks, Daniel --- [1] mkfs.btrfs -m raid1 -d raid1 -l 16k -n 16k /dev/sda /dev/sdb mount /dev/sda /store && cd /store fio /usr/share/doc/fio/examples/iometer-file-access-server --- [2] kernel BUG at /home/apw/COD/linux/fs/btrfs/extent-tree.c:1481! -- Daniel J Blueman
2018 May 28
0
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
...ate --live --copy-storage-all --verbose vm1 > > qemu+ssh://192.168.1.91/system tcp://192.168.1.91". I have checked the > > libvirt debug output, and make sure that the drive-mirror + NBD > > migration method is used. > > > > Inside the VM, I use an I/O benchmark (Iometer) to generate an OLTP > > workload. I record the I/O performance (IOPS) before/during/after > > migration. When the migration begins, the IOPS dropped by 30%-40%. > > This is reasonable, because the migration I/O competes with the > > workload I/O. However, during almost the...
2008 Oct 02
1
Terrible performance when setting zfs_arc_max snv_98
...nSolaris snv_98, and the rest is configured as striped mirrors as a zpool. I created a zfs filesystem on this pool with a blocksize of 8K. This server has 64GB of memory and will be running PostgreSQL, so we need to cut down ARC memory usage. But before I do this I tested the zfs performance using iometer (it was a bit tricky getting it to compile but it's running). So far so good. Figures look very promising, with staggering random read and write figures! There are just a few problems: every few seconds, disk LEDs stop working for a few seconds, except one disk at a time. When t...
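On OpenSolaris of that vintage, capping the ARC is usually done in /etc/system followed by a reboot; a sketch (the 16GB value is illustrative, not from the post):

  set zfs:zfs_arc_max = 0x400000000    # 16GB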
2008 Jun 30
18
Unable to remove GPLPV drivers without breaking win2k3 domU
I have a Win2K3 domU (and thankfully an image backup of the LVM volume that holds its system disk) I previously installed GPLPV v0.8.9 drivers, the domU boots OK with or without the /GPLPV switch in boot.ini, however with the /GPLPV switch it tries and fails to use the Xen network driver, and so the machine is network-less, so for the past few months I've left it using pure HVM
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
...he PCI-X cards. The problem I have is that disk access seems to stop for a few seconds and then continue. This happens every few seconds, and the end result is that the performance is terrible and unusable. The idea was to use this box for serving iSCSI to a Windows 2003 Server. However, with IOmeter on the Windows box and looking at Task Manager, I noticed that the speed pulses from 90% to 0% all the time. Investigating further, I noticed that I get the same behavior during a simple cp on the localhost. /usr/X11/bin/scanpci gives me this information: pci bus 0x0006 cardnum 0x01 function 0x0...
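A quick way to see whether a single device is stalling during those pauses is per-disk extended iostat on the host (assuming a Solaris-based system, as the scanpci output suggests):

  iostat -xn 1     # watch asvc_t and %b per device during a stall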
2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list, someone (actually Neil Perrin (CC)) mentioned in this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html that it should be possible to import a pool with failed log devices (with or without data loss?). > Has the following error no consequences? > Bug ID 6538021 > Synopsis: Need a way to force pool startup when
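For what it's worth, later ZFS builds grew an explicit escape hatch for exactly this case (availability depends on the build, which postdates parts of that thread; pool name hypothetical):

  zpool import -m tank     # import even if log devices are missing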
2016 Apr 11
0
High Guest CPU Utilization when using libgfapi
...privileged time. I also tested the workload with a volume from NFS, and the CPU utilization is only around 5%. For gluster fuse, the CPU utilization is also not as high as 90%, around 30-40%. Is there any reason why that is so? I'd appreciate any comments and suggestions. Details. Workload generator: IOMeter Instance: Windows 2008R2 Openstack Host: CentOS 7 qemu-kvm version: qemu-kvm-ev-2.1.2-23.el7.1.x86_64 Workload: 4kb random write, 4 workers, each configured to have 4 outstanding IOs Gluster server and client version: 3.7.8 Gluster info Volume Name: g37test Type: Distribute Volume ID: 27...
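For comparison against the fuse numbers, a libgfapi disk attachment in libvirt looks roughly like this (volume name taken from the post; host and image names hypothetical):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source protocol='gluster' name='g37test/instance.img'>
      <host name='gluster-host' port='24007'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>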