similar to: measuring iops on linux - numbers make sense?

Displaying 20 results from an estimated 3000 matches similar to: "measuring iops on linux - numbers make sense?"

2008 Jul 06
2
Measuring ZFS performance - IOPS and throughput
Can anybody tell me how to measure the raw performance of a new system I'm putting together? I'd like to know what it's capable of in terms of IOPS and raw throughput to the disks. I've seen Richard's raidoptimiser program, but I've only seen results for random read IOPS performance, and I'm particularly interested in write
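fio is the usual tool for this kind of raw measurement. A minimal sketch for the random-write side the poster is asking about (pool path, file size, and runtime are placeholders, not from the thread):

  # 4k random writes, 60-second run, report aggregate IOPS
  fio --name=randwrite --filename=/tank/fio.test --size=4g \
      --bs=4k --rw=randwrite --ioengine=libaio --iodepth=32 \
      --runtime=60 --time_based --group_reporting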
2013 Mar 18
2
Disk iops performance scalability
Hi, seeing a drop-off in IOPS when more vCPUs are added:
3.8.2 kernel / xen-4.2.1 / single domU / LVM backend / 8GB RAM domU / 2GB RAM dom0
dom0_max_vcpus=2 dom0_vcpus_pin
domU 8 cores: fio result 145k IOPS
domU 10 cores: fio result 99k IOPS
domU 12 cores: fio result 89k IOPS
domU 14 cores: fio result 81k IOPS
ioping . -c 3
4096 bytes from . (ext4 /dev/xvda1): request=1 time=0.1 ms
4096 bytes
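A sweep like the one above can be scripted from dom0; a sketch, assuming the xl toolstack, a guest named mydomU, and ssh access into it (all placeholders):

  for n in 8 10 12 14; do
      xl vcpu-set mydomU $n          # dom0: adjust the guest's online vCPUs
      ssh root@mydomU "fio --name=sweep --filename=/root/fio.test \
          --size=1g --bs=4k --rw=randread --ioengine=libaio \
          --iodepth=32 --runtime=30 --time_based" | grep -i iops
  done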
2014 Jan 24
2
IOPS required by Asterisk for Call Recording
Hi, what are the disk IOPS required for Asterisk call recording? I am trying to find out the number of disks required in a RAID array to record 500 calls. Is there any formula to calculate the IOPS required by Asterisk call recording? This would help me find the IOPS requirement at different scales. If I assume that Asterisk will write data to disk every second for each call, I will need a disk array to support a minimum
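No formula appears in the thread, but a back-of-the-envelope sketch using the poster's own one-write-per-call-per-second assumption (the RAID penalty factor is an added assumption, not from the thread):

  calls=500            # concurrent recordings
  writes_per_sec=1     # assumed flush rate per call
  raid_penalty=2       # assumed RAID10 write penalty (4 for RAID5)
  echo $((calls * writes_per_sec * raid_penalty))   # ~1000 backend write IOPS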
2014 Nov 18
1
Storage IOPs Calculation for Qmail Server
Dear DovecotORG,
In my organization, we are about to implement a Qmail server.
* The number of current users will be 800; in the future it may increase up to 1200.
* The number of concurrent users will be 300.
I am the engineer deploying Qmail on a Linux server. I need to tell the storage team the IOPS requirement. I requested 8TB of usable space for the mail storage (can
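A rough sizing sketch in the same spirit (the per-user rate is a pure assumption; a pilot deployment measured with iostat is far more trustworthy):

  concurrent=300
  iops_per_user=2      # assumed: one read + one write per user per second
  raid_penalty=2       # assumed RAID10 write penalty
  echo $((concurrent * iops_per_user * raid_penalty))   # ballpark backend IOPS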
2016 Feb 03
6
Measuring memory bandwidth utilization
I'd like to know what the cause of a particular DB server's slowdown might be. We've ruled out IOPS for the disks (~20%) and raw CPU load (top shows perhaps half of the cores busy), but the system slows to a crawl. We suspect that we're simply running out of memory bandwidth but have no way to confirm this suspicion. Is there a way to test for this? Think: iostat but for
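One option, assuming an Intel CPU with Intel's Processor Counter Monitor (PCM) installed, is its memory-bandwidth tool; a sketch:

  # per-socket memory read/write bandwidth, sampled every second
  pcm-memory 1

perf with uncore IMC counters can report similar numbers, but the event names are CPU-generation specific.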
2004 Feb 26
0
Iops Vorbis player
Hi, does anyone have, or know much about, the Iops portable players? Are they better/worse than the players from iriver? I am looking for a USB-thumb-drive style of music player (that plays Vorbis, of course). From what I have been able to gather from the Iops website, it suits this purpose, but the site is all in Korean. I was hoping to find someone who had actually purchased one of these
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Prefetching at the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent I/Os from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms). I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
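The primarycache change mentioned here is a per-dataset ZFS property; a sketch with a placeholder dataset name:

  # cache only metadata in ARC; file data is then served from disk/L2ARC
  zfs set primarycache=metadata tank/oradata
  zfs get primarycache tank/oradata    # verify the setting took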
2009 May 28
2
[PATCH node] correctly use collectd udp dns entry
---
 scripts/ovirt                 | 2 +-
 scripts/ovirt-config-collectd | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/ovirt b/scripts/ovirt
index 8296783..4a7cbc7 100755
--- a/scripts/ovirt
+++ b/scripts/ovirt
@@ -43,7 +43,7 @@ start() {
         log "skipping ovirt-awake, oVirt identify service not available"
     fi
-    find_srv collectd tcp
+    find_srv collectd udp
2012 Mar 30
4
[PATCH] virtio_blk: Drop unused request tracking list
Benchmarks show a small performance improvement on a Fusion-io device.
Before:
seq-read : io=1,024MB, bw=19,982KB/s, iops=39,964, runt= 52475msec
seq-write: io=1,024MB, bw=20,321KB/s, iops=40,641, runt= 51601msec
rnd-read : io=1,024MB, bw=15,404KB/s, iops=30,808, runt= 68070msec
rnd-write: io=1,024MB, bw=14,776KB/s, iops=29,552, runt= 70963msec
After:
seq-read : io=1,024MB, bw=20,343KB/s,
2008 Nov 16
1
Opening the 2.4 commit fest (RRD)
On Sun, Nov 16, 2008 at 11:08 AM, Arnaud Quette <aquette.dev at gmail.com> wrote:
>>> The remainder, until the -pre stage, will be:
>>> - the Powerman support (through the powerman driver) for more PDUs
>>> - the possible RRD integration into upslog
>>
>> It would be nice to have native RRD support in upslog, but we may also
>> want to point
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
Holding the vblk->lock across kick causes poor scalability in SMP guests. If one CPU is doing a virtqueue kick and another CPU touches vblk->lock, it will have to spin until the virtqueue kick completes. This patch reduces system% CPU utilization in SMP guests that are running multithreaded I/O-bound workloads. The improvements are small but show as IOPS and SMP are increased. Khoa Huynh
2009 Dec 24
1
high read iops - more memory for arc?
I'm running into an issue where there seems to be a high number of read IOPS hitting the disks, and physical free memory is fluctuating between 200MB -> 450MB out of 16GB total. We have the L2ARC configured on a 32GB Intel X25-E SSD and the slog on another 32GB X25-E SSD. According to our tester, Oracle writes are extremely slow (high latency). Below is a snippet of iostat: r/s w/s
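The quoted iostat columns (r/s, w/s) suggest Solaris-style extended device statistics; a sketch of the usual invocation, assumed from the zfs-discuss context:

  # extended stats by device name, 5-second intervals
  iostat -xn 5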
2010 Oct 12
2
Multiple SLOG devices per pool
I have a pool with a single SLOG device rated at Y IOPS. If I add a second (non-mirrored) SLOG device also rated at Y IOPS, will my zpool now theoretically be able to handle 2Y IOPS? Or close to that? Thanks, Ray
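Adding the second log device is a one-liner, and ZFS stripes synchronous log writes across non-mirrored log vdevs (pool and device names are placeholders):

  zpool add tank log c2t1d0
  zpool status tank      # both devices should now appear under 'logs'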
2009 Jun 10
6
Asymmetric mirroring
Hello everyone, I'm wondering if the following makes sense: To configure a system for high IOPS, I want to have a zpool of 15K RPM SAS drives. For high IOPS, I believe it is best to let ZFS stripe them, instead of doing a raidz1 across them. Therefore, I would like to mirror the drives for reliability. Now, I'm wondering if I can get away with using a large capacity 7200
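The striped-mirror layout being described is just multiple mirror vdevs in one pool; a sketch with placeholder device names:

  # two two-way mirrors, striped at the pool level
  zpool create fastpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0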
2014 Oct 22
0
config file locations
On Tuesday 21 October 2014 22:06:48 Charles Lepple did opine
And Gene did reply:
> Hi Gene,
>
> On Oct 21, 2014, at 9:12 PM, Gene Heskett <gheskett at wdtv.com> wrote:
>
>> configure: error: libgd not found, required for CGI build
>>
>> And gdlib does not appear to be available from the repo's.
>
> Sorry, I must have missed that message.
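For the quoted configure error, the usual fixes are installing the gd development headers or building without the CGI programs; a sketch (the package name varies by distro, and the configure switches are assumptions about NUT's build options):

  # Debian/Ubuntu
  sudo apt-get install libgd-dev
  ./configure --with-cgi
  # or skip the CGI build entirely
  ./configure --without-cgi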
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you.
With performance.write-behind-trickling-writes ON (default):

## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1
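The toggle under test is a per-volume option; a sketch with a placeholder volume name (the option string itself comes from the thread):

  gluster volume set myvol performance.write-behind-trickling-writes off
  gluster volume get myvol performance.write-behind-trickling-writes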
2018 Nov 07
2
Re: guestfs_launch() fails when C application is started as a systemd service
> That makes no sense because we are supposed to have just forked successfully

I just realized libguestfs uses fork. Now we know why qemu-img worked - I launched it with popen.

> So it must be something to do with collectd and how it runs programs.
> Is it using LD_PRELOAD trickery, or replacing libc, or using seccomp?

If I understand the question correctly - it's about how
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), who might have more insights here; and wrap long lines.

On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote:
> Hi, everyone.
>
> Recently I am doing some tests on the VM storage+memory migration with
> KVM/QEMU/libvirt. I use the following migrate command through virsh:
> "virsh migrate --live
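A live migration that also copies local storage (the path that exercises drive-mirror) typically looks like this; guest and host names are placeholders:

  virsh migrate --live --copy-storage-all --verbose \
      myguest qemu+ssh://desthost/system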