search for: ioping

Displaying 20 results from an estimated 522 matches for "ioping".

2009 Dec 04
2
measuring iops on linux - numbers make sense?
Hello, When approaching hosting providers for services, the first question many of them asked us was about the amount of IOPS the disk system should support. While we stress-tested our service, we recorded between 4000 and 6000 "merged io operations per second" as seen in "iostat -x" and collectd (varies between the different components of the system, we have a few such
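A minimal way to watch those counters live, assuming sysstat's iostat and a hypothetical device name:
  # rrqm/s and wrqm/s are the read/write requests merged per second;
  # r/s and w/s are the requests actually issued to the device
  iostat -x sda 2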
2012 Mar 30
4
[PATCH] virtio_blk: Drop unused request tracking list
Benchmark shows small performance improvement on fusion io device. Before: seq-read : io=1,024MB, bw=19,982KB/s, iops=39,964, runt= 52475msec seq-write: io=1,024MB, bw=20,321KB/s, iops=40,641, runt= 51601msec rnd-read : io=1,024MB, bw=15,404KB/s, iops=30,808, runt= 68070msec rnd-write: io=1,024MB, bw=14,776KB/s, iops=29,552, runt= 70963msec After: seq-read : io=1,024MB, bw=20,343KB/s,
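The fio job behind these numbers is not shown; a sketch of the seq-read leg that would produce output in this shape, with the 512-byte block size inferred from the bw/iops ratio and the filename, ioengine and iodepth assumed:
  # one of four legs; swap --rw for write/randread/randwrite for the others
  fio --name=seq-read --rw=read --bs=512 --size=1g --direct=1 \
      --ioengine=libaio --iodepth=32 --filename=/dev/vdb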
2013 Mar 18
2
Disk iops performance scalability
...p-off in iops when more vcpus are added:- 3.8.2 kernel/xen-4.2.1/single domU/LVM backend/8GB RAM domU/2GB RAM dom0 dom0_max_vcpus=2 dom0_vcpus_pin domU 8 cores fio result 145k iops domU 10 cores fio result 99k iops domU 12 cores fio result 89k iops domU 14 cores fio result 81k iops ioping . -c 3 4096 bytes from . (ext4 /dev/xvda1): request=1 time=0.1 ms 4096 bytes from . (ext4 /dev/xvda1): request=2 time=0.7 ms 4096 bytes from . (ext4 /dev/xvda1): request=3 time=0.8 ms --- . (ext4 /dev/xvda1) ioping statistics --- 3 requests completed in 2002.0 ms, 1836 iops, 7.2 mb/s min/avg/max/m...
2014 Jan 24
2
IOPS required by Asterisk for Call Recording
Hi What are the disk IOPS required for Asterisk call recording? I am trying to find out the number of disks required in a RAID array to record 500 calls. Is there any formula to calculate the IOPS required by Asterisk call recording? This will help me to find the IOPS at different scales. If I assume that Asterisk will write data to disk every second for each call, I will need a disk array to support a minimum
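A rough back-of-the-envelope estimate, assuming each recording is flushed about once per second (the interval and any filesystem metadata overhead are assumptions, not measurements):
  calls=500; writes_per_call_per_sec=1
  echo $(( calls * writes_per_call_per_sec ))   # ~500 write IOPS, plus headroom for metadata/journal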
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact with: are 700-800 IOPS reasonable for a 7200 RPM SATA drive (1 TB Sun-badged Seagate ST31000N in a J4400)? I have a resilver running and am seeing about 700-800 writes/sec on the hot spare as it resilvers. There is no other I/O activity on this box, as this is a remote replication target for production data. I have a the
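As a rough sanity check, the classic random-IOPS ceiling for a 7200 RPM drive follows from rotational plus seek latency (the ~8.5 ms average seek is an assumed typical figure, not measured on this drive):
  # avg rotational latency = half a revolution = 60/7200/2 s =~ 4.2 ms
  awk 'BEGIN { rot = 60/7200/2*1000; seek = 8.5; print 1000/(rot + seek) }'   # ~79 random IOPS
So 700-800 writes/sec would imply mostly sequential or aggregated writes rather than pure random I/O.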
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
Holding the vblk->lock across kick causes poor scalability in SMP guests. If one CPU is doing virtqueue kick and another CPU touches the vblk->lock, it will have to spin until the virtqueue kick completes. This patch reduces system% CPU utilization in SMP guests that are running multithreaded I/O-bound workloads. The improvements are small but show up as iops and vcpu counts are increased. Khoa Huynh
2010 Oct 12
2
Multiple SLOG devices per pool
I have a pool with a single SLOG device rated at Y iops. If I add a second (non-mirrored) SLOG device also rated at Y iops will my zpool now theoretically be able to handle 2Y iops? Or close to that? Thanks, Ray
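For reference, a second non-mirrored SLOG is added as its own top-level log vdev (pool and device names hypothetical):
  zpool add tank log c2t0d0
ZFS load-balances ZIL writes across separate log vdevs, so aggregate throughput can approach 2Y in principle, though the actual gain depends on the synchronous write pattern.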
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
Hi folks, The following are initial virtio-scsi + target vhost benchmark results using multiple target LUNs per vhost and multiple virtio PCI adapters to scale the total number of virtio-scsi LUNs into a single KVM guest. The test setup is currently using 4x SCSI LUNs per vhost WWPN, with 8x virtio PCI adapters for a total of 32x 500MB ramdisk LUNs into a single guest, along with each backend
2013 Oct 07
2
Need help with plotting the graph
Hello All, The version of R I am using is as follows > version _ platform x86_64-pc-linux-gnu arch x86_64 os linux-gnu system x86_64, linux-gnu status major 2 minor 14.1 year 2011 month 12 day 22 svn rev 57956 language R version.string R version 2.14.1 (2011-12-22) I just few days
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi, for zfs raidz1, I know that for random io the iops of a raidz1 vdev equal those of one physical disk. Since raidz1 is like raid5, does raid5 have the same performance as raidz1, i.e. random iops equal to one physical disk's iops? Regards Victor -- This message posted from opensolaris.org
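As a worked example with assumed per-disk figures: five disks at ~150 random IOPS each give a single raidz1 vdev roughly one disk's worth (~150 IOPS) of small random I/O, because each block is spread across every disk in the vdev; the same five disks as independent vdevs would stripe to roughly 5 x 150 = 750 IOPS. Traditional raid5 differs in the details: small random reads touch only one disk and so scale with the disk count, while small random writes pay a read-modify-write penalty of roughly four disk operations each.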
2019 Jan 05
1
Re: [PATCH nbdkit 0/7] server: Implement NBD_FLAG_CAN_MULTI_CONN.
Here are some more interesting numbers, concentrating on the memory plugin and RAM disks. All are done using 8 threads and multi-conn, on a single unloaded machine with 16 cores, using a Unix domain socket. (1) The memory plugin using the sparse array, as implemented upstream in 1.9.8: read: IOPS=103k, BW=401MiB/s (420MB/s)(46.0GiB/120002msec) write: IOPS=103k, BW=401MiB/s
2010 Jul 05
21
Aoe or iScsi???
Hi people... Here we use Xen 4 with Debian Lenny... We're using kernel 2.6.31.13 pvops... As a storage system, we use AoE devices... So, we installed VMs on an AoE partition... The "NAS" server is an Intel-based bare-metal box with a SATA hard disk... However, sometimes I feel that the VMs are slow... Also, all VMs have GPLPV drivers installed... So, I am thinking about
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you. With performance.write-behind-trickling-writes ON (default): ## 4k randwrite # fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 fio-3.1 Starting 1 process Jobs: 1
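For context, the option being compared above can be toggled per volume, assuming it is exposed as a normal volume option (volume name hypothetical; the option name is taken from the post):
  gluster volume set myvol performance.write-behind-trickling-writes off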
2006 May 09
4
ks.test one-sample - where can I get a list of the strings specifying the distribution?
Dear all, One can use ks.test(x,y) for a one-sample kolmogorov-smirnov test: x being the data sample y being a string specifying a distribution I notice the help on ks.test does not tell you how to get such a list. Is this a hole in my R knowledge? Where can I get a list of the strings specifying the possible distributions? and more specifically What would be the string and following
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), who might have more insights here; and wrap long lines. On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote: > Hi, everyone. > > Recently I am doing some tests on the VM storage+memory migration with > KVM/QEMU/libvirt. I use the following migrate command through virsh: > "virsh migrate --live
2005 Sep 07
7
Asynchronous IO
Hi, I have installed Xen on Linux 2.6.11.10 and I am trying to do asynchronous direct IO on SAS drives. The application which does the asynchronous direct IO on the SAS drive is running on Domain 0. Actually, the IOPS I get for a 512-byte IO size is 67, but if I do the same operation on a native Linux 2.6.11.10 kernel, I get 267 IOPS. Can anyone
2014 Nov 18
1
Storage IOPs Calculation for Qmail Server
Dear DovecotORG, In my organization, we are about to implement a Qmail server. * The number of current users will be 800; in future it may increase up to 1200. * The number of concurrent users will be 300. I am the engineer deploying Qmail on a Linux server. I need to tell the storage team the IOPS requirement. I requested 8TB usable space for the mail storage (can