Hi,

Seeing a drop-off in IOPS when more vCPUs are added:

3.8.2 kernel / xen-4.2.1 / single domU / LVM backend / 8GB RAM domU / 2GB RAM dom0

dom0_max_vcpus=2 dom0_vcpus_pin

domU  8 cores  fio result 145k iops
domU 10 cores  fio result  99k iops
domU 12 cores  fio result  89k iops
domU 14 cores  fio result  81k iops

ioping . -c 3
4096 bytes from . (ext4 /dev/xvda1): request=1 time=0.1 ms
4096 bytes from . (ext4 /dev/xvda1): request=2 time=0.7 ms
4096 bytes from . (ext4 /dev/xvda1): request=3 time=0.8 ms

--- . (ext4 /dev/xvda1) ioping statistics ---
3 requests completed in 2002.0 ms, 1836 iops, 7.2 mb/s
min/avg/max/mdev = 0.1/0.5/0.8/0.3 ms

The initial ioping response is fast, but the later ones show much higher latency. Any ideas?

Thanks
John
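For reference, a random-read fio job of the kind that produces numbers like the above might be invoked as below; the actual job file was not posted, so the target device, block size, queue depth, and job count are assumptions:

  # Hypothetical fio invocation approximating the benchmark above; the real
  # parameters (filename, iodepth, numjobs) were not given and are assumed.
  fio --name=randread-test --filename=/dev/xvdb --direct=1 \
      --ioengine=libaio --rw=randread --bs=4k \
      --iodepth=32 --numjobs=8 --runtime=60 --time_based \
      --group_reporting

With --direct=1 the guest page cache is bypassed, so the figures reflect the blkfront/blkback path rather than cached reads.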
Hi!

> Seeing a drop-off in IOPS when more vCPUs are added:
>
> 3.8.2 kernel/xen-4.2.1/single domU/LVM backend/8GB RAM domU/2GB RAM dom0
>
> dom0_max_vcpus=2 dom0_vcpus_pin
>
> domU 8 cores fio result 145k iops
[...]
> domU 14 cores fio result 81k iops
[...]
> The initial ioping response is fast, but the later ones show much higher latency. Any ideas?

There are several possible optimizations:
* First, I'd set the I/O scheduler for all DomUs to 'noop'; there is no benefit in scheduling I/Os twice.
* Second, I'd give Dom0 more scheduler weight (xm sched-credit -d 0 -w 512).

Then run the benchmarks again and try to find the bottlenecks with dstat/iostat/... to see what is going on. Rebalancing interrupts in Dom0 may also help (either manually or with the help of irqbalance).

Several other optimizations are possible, but most of them depend on your use case (increasing RAM in Dom0 for caching, changing readahead settings in Dom0 for the storage backend to better fit the I/O requests, adding more CPUs to Dom0 so it can better handle the storage backend, and so on).

Please post your results!

-- Adi
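The concrete commands behind these suggestions, assuming the domU disk appears as /dev/xvda and the credit scheduler is in use, would look roughly like this:

  # In each DomU: switch the block device to the noop elevator
  echo noop > /sys/block/xvda/queue/scheduler
  cat /sys/block/xvda/queue/scheduler     # verify; the active scheduler is shown in brackets

  # In Dom0: raise Dom0's credit-scheduler weight (the default is 256)
  xm sched-credit -d 0 -w 512
  xm sched-credit                         # list weights/caps for all domains

  # In Dom0: inspect/adjust readahead on the backing LV (path is an assumption)
  blockdev --getra /dev/vg0/domu-disk
  blockdev --setra 4096 /dev/vg0/domu-disk

Note that xm sched-credit only applies while the credit scheduler is active; with the xl toolstack the equivalent is xl sched-credit.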
Hi,

It was a typo in the config file (cpus=) that caused the drop in IOPS. Now I get 145k consistently.

The other issue still remains: very poor ioping latency, especially as the first request comes back in 1 ms and then the rest take six times as long.

Thanks
John

On 19 Mar 2013, at 09:07, Adi Kriegisch <adi@cg.tuwien.ac.at> wrote:

> Hi!
>
>> Seeing a drop-off in IOPS when more vCPUs are added:
> [...]
> There are several possible optimizations:
> * First, I'd set the I/O scheduler for all DomUs to 'noop'; there is no benefit in scheduling I/Os twice.
> * Second, I'd give Dom0 more scheduler weight (xm sched-credit -d 0 -w 512).
> [...]
> Please post your results!
>
> -- Adi
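For context, the relevant lines in a domU config look something like the fragment below. The actual values in John's file were not posted, so these are illustrative only; the point is that a mistyped cpus= line can silently pin all vCPUs onto far fewer physical cores, which would produce exactly this kind of drop-off as vCPUs are added:

  # Illustrative domU config fragment (all values hypothetical)
  vcpus = 14
  # A mistyped pinning line, e.g. restricting all 14 vCPUs to two pCPUs:
  # cpus = "2-3"
  # Corrected to span the cores reserved for the guest:
  cpus = "2-15"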