search for: ioengine

Displaying 20 results from an estimated 74 matches for "ioengine".

2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you. With performance.write-behind-trickling-writes ON (default):

## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=17.3MiB/s][r=0,...
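The option toggled in this thread is a regular Gluster volume option; a minimal sketch of switching it from the gluster CLI, assuming a volume named gv0 (the volume name is hypothetical):

# "gv0" is a placeholder volume name; substitute your own
gluster volume set gv0 performance.write-behind-trickling-writes off
gluster volume set gv0 performance.write-behind-trickling-writes on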
2018 Mar 20
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod <mailinglists at smcleod.net> wrote:
> Hi Raghavendra,
>
> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
> Aggregating a large number of small writes by write-behind into large writes has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
2014 Mar 23
0
for Chris Mason ( iowatcher graphs)
...nd run blktrace from dom0 like this:

blktrace -w 60 -d /dev/disk/vbd/21-920 -o - > test.trace

/dev/disk/vbd/21-920 is the software RAID containing 2 LV volumes; each LV volume is created on a big SRP-attached disk. Inside the VM I try to do some work via fio:

[global]
rw=randread
size=128m
directory=/tmp
ioengine=libaio
iodepth=4
invalidate=1
direct=1

[bgwriter]
rw=randwrite
iodepth=32

[queryA]
iodepth=1
ioengine=mmap
direct=0
thinktime=3

[queryB]
iodepth=1
ioengine=mmap
direct=0
thinktime=5

[bgupdater]
rw=randrw
iodepth=16
thinktime=40
size=128m

After that I try to get a graph like: iowatcher -t test.tra...
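For reference, a hedged sketch of turning the captured trace into a graph with iowatcher; test.trace is the file written by the blktrace command above, while the output file name (and the exact flags on your iowatcher version) should be treated as assumptions:

# trace.svg is a hypothetical output name
iowatcher -t test.trace -o trace.svg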
2017 Oct 10
2
small files performance
...(Note: readdir should be on.) This is what I'm getting with the suggested parameters. I'm running "fio" from a mounted gluster client:

172.16.0.12:/gv0 on /mnt2 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

# fio --ioengine=libaio --filename=fio.test --size=256M --direct=1 --rw=randrw --refill_buffers --norandommap --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=fio-test

fio-test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iod...
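For readability, the same benchmark expressed as a fio job file; every parameter is copied from the command line above, and the job is assumed to be run from the gluster FUSE mount point (/mnt2):

[global]
# parameters below mirror the fio command line in the post
ioengine=libaio
direct=1
rw=randrw
rwmixread=70
bs=8k
size=256M
iodepth=16
numjobs=16
runtime=60
refill_buffers
norandommap
group_reporting

[fio-test]
filename=fio.test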
2012 Mar 25
3
attempt to access beyond end of device and livelock
...nd end of device ram0: rw=129, want=8452072, limit=4096000 ...
---
[2]
modprobe brd rd_size=2048000 (or boot with ramdisk_size=2048000)
mkfs.btrfs -m raid0 /dev/ram0 /dev/ram1
mount /dev/ram0 /mnt -o discard
fio testcase
umount /mnt
---
[3] testcase
[global]
directory=/mnt
rw=randread
size=256m
ioengine=libaio
iodepth=4
invalidate=1
direct=1

[bgwriter]
rw=randwrite
iodepth=32

[queryA]
iodepth=1
ioengine=mmap
thinktime=3

[queryB]
iodepth=1
ioengine=mmap
thinktime=1

[bgupdater]
rw=randrw
iodepth=16
thinktime=1
size=32m
--
Daniel J Blueman
2012 Dec 02
3
[PATCH] vhost-blk: Add vhost-blk support v6
...ed image support to vhost-blk once we have an in-kernel AIO interface. There is some work in progress on an in-kernel AIO interface from Dave Kleikamp and Zach Brown: http://marc.info/?l=linux-fsdevel&m=133312234313122

Performance evaluation:
-----------------------------
LKVM: Fio with libaio ioengine on 1 Fusion IO device

IOPS(k)     Before   After   Improvement
seq-read    107      121     +13.0%
seq-write   130      179     +37.6%
rnd-read    102      122     +19.6%
rnd-write   125      159     +27.0%

QEMU: Fio with libaio ioengine on 1 Fusion IO device

IOPS(k)     Before   After   Imp...
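The excerpt does not include the fio job file used for these numbers; a minimal hedged sketch of a libaio random-read job against a guest block device, where the block size, queue depth, runtime, and device path are assumptions rather than values from the post:

[global]
ioengine=libaio
direct=1
bs=4k              # assumed, not stated in the post
iodepth=32         # assumed
runtime=60         # assumed
filename=/dev/vdb  # hypothetical guest device backed by vhost-blk

[rnd-read]
rw=randread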
2017 Oct 10
0
small files performance
...> This is what I'm getting with the suggested parameters.
> I'm running "fio" from a mounted gluster client:
> 172.16.0.12:/gv0 on /mnt2 type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
> # fio --ioengine=libaio --filename=fio.test --size=256M --direct=1 --rw=randrw --refill_buffers --norandommap --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=fio-test
> fio-test: (g=0): rw=randrw, bs=8K-8K/8K-8K/8K-8K,...
2014 May 30
4
[PATCH] block: virtio_blk: don't hold spin lock during world switch
...'t good to hold the lock and block other vCPUs. On an arm64 quad-core VM (qemu-kvm), the patch can increase I/O performance a lot with VIRTIO_RING_F_EVENT_IDX enabled:
- without the patch: 14K IOPS
- with the patch: 34K IOPS

fio script:
[global]
direct=1
bsrange=4k-4k
timeout=10
numjobs=4
ioengine=libaio
iodepth=64
filename=/dev/vdc
group_reporting=1

[f1]
rw=randread

Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst at redhat.com>
Cc: virtualization at lists.linux-foundation.org
Signed-off-by: Ming Lei <ming.lei at canonical.com> -...
2017 Sep 09
2
GlusterFS as virtual machine storage
...r one during the FUSE test, so it had to crash immediately (only one of three nodes was actually up). This definitely happened for the first time (only one node had been killed yesterday). Using FUSE seems to be OK with replica 3. So this can be gfapi related, or maybe rather libvirt related. I tried ioengine=gfapi with fio and the job survived the reboot.

-ps

On Sat, Sep 9, 2017 at 8:05 AM, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi,
>
> On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote:
>> Pavel.
>>
>> Is there a difference between native cli...
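The gfapi-based fio job itself is not shown in the post; a minimal hedged sketch of what a job using fio's gfapi ioengine can look like, with the volume name, server host, and I/O parameters all being assumptions:

[global]
ioengine=gfapi
volume=gv0            # hypothetical Gluster volume name
brick=gluster-node1   # hypothetical server hosting the volume
rw=randwrite          # assumed workload
bs=512k               # assumed
size=1g               # assumed
direct=1

[gfapi-test]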
2017 Jul 31
2
read/write performance through mount point by guestmount
I create a 4x256GB-SSD RAID0 (/dev/md1) and test its performance with fio.

fio config:
ioengine=libaio
direct=1
time_based
runtime=120
ramp_time=30
size=100g

The sequential read/write performance is:
read: 2000MB/s
write: 1800MB/s

Now I make an ext4 filesystem on the RAID0 (/dev/md1) and mount it on /home/. And I create a 100G disk.qcow2 with guestfish:
$ guestfish
><fish>: disk-cr...
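Since the thread is about performance through a guestmount mount point, a hedged sketch of mounting the created qcow2 with guestmount before running fio against it; the image path, in-guest device name, and mount point are assumptions for illustration:

guestmount -a /home/disk.qcow2 -m /dev/sda /mnt/guest   # paths and device name are hypothetical
# ... run the fio job against files under /mnt/guest ...
guestunmount /mnt/guest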
2017 Sep 09
0
GlusterFS as virtual machine storage
...> crash immediately (only one of three nodes was actually up). This
> definitely happened for the first time (only one node had been killed
> yesterday).
>
> Using FUSE seems to be OK with replica 3. So this can be gfapi related
> or maybe rather libvirt related.
>
> I tried ioengine=gfapi with fio and the job survived the reboot.
>
> -ps

So, to recap:
- with gfapi, your VMs crash/mount read-only on a single node failure;
- with gfapi also, fio seems to have no problems;
- with the native FUSE client, both VMs and fio have no problems at all.
Is that correct? Thanks.
--...
2013 Jan 31
4
[RFC][PATCH 2/2] Btrfs: implement unlocked dio write
...he race between dio write and punch hole, because we have extent lock to protect our operation. I ran fio to test the performance of this feature.

== Hardware ==
CPU: Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz
Mem: 2GB
SSD: Intel X25-M 120GB (Test Partition: 60GB)

== config file ==
[global]
ioengine=psync
direct=1
bs=4k
size=32G
runtime=60
directory=/mnt/btrfs/
filename=testfile
group_reporting
thread

[file1]
numjobs=1 # 2 4
rw=randwrite

== result (KBps) ==
write    1      2      4
lock     24936  24738  24726
nolock   24962  30866  32101

== result (iops) ==
write    1     2     4
lock     6234  6184  6181
nolock   6240  7716  8025...
2012 Jul 12
3
[PATCH v2] Btrfs: improve multi-thread buffer read
...o hold more pages and reduce the number of bios we need. Here are some numbers taken from fio results:

               w/o patch            w patch
-------------  --------   ---------------
READ:          745MB/s    +32%   987MB/s

[1]:
[global]
group_reporting
thread
numjobs=4
bs=32k
rw=read
ioengine=sync
directory=/mnt/btrfs/

[READ]
filename=foobar
size=2000M
invalidate=1

Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
---
v1->v2: if we fail to make an allocation, just fall back to the old way to read the page.

fs/btrfs/extent_io.c | 41 +++++++++++++++++++++++++++++++++++++++...
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
...usr=37.25 minf=19349958.33
cpu sys=723.63 majf=27597.33 ctx=850199927.33 usr=35.35 minf=19092343.00

FIO config file:
[global]
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
group_reporting
norandommap
ioscheduler=noop
thread
bs=512
size=4MB
direct=1
filename=/dev/vdb
numjobs=256
ioengine=aio
iodepth=64
loops=3

Signed-off-by: Stefan Hajnoczi <stefanha at linux.vnet.ibm.com>
---
Other block drivers (cciss, rbd, nbd) use spin_unlock_irq() so I followed that. To me this seems wrong: blk_run_queue() uses spin_lock_irqsave() but we enable irqs with spin_unlock_irq(). If the calle...
2017 Sep 06
2
GlusterFS as virtual machine storage
...plicated volume with arbiter (2+1) and VM on KVM (via OpenStack)
> with disk accessible through gfapi. Volume group is set to virt
> (gluster volume set gv_openstack_1 virt). VM runs current (all
> packages updated) Ubuntu Xenial.
>
> I set up the following fio job:
>
> [job1]
> ioengine=libaio
> size=1g
> loops=16
> bs=512k
> direct=1
> filename=/tmp/fio.data2
>
> When I run fio fio.job and reboot one of the data nodes, the IO statistics
> reported by fio drop to 0KB/0KB and 0 IOPS. After a while, the root
> filesystem gets remounted as read-only.
>
> If y...
2017 Sep 06
0
GlusterFS as virtual machine storage
...uster 3.10.5 on CentOS 7. I created a replicated volume with arbiter (2+1) and a VM on KVM (via OpenStack) with its disk accessible through gfapi. The volume group is set to virt (gluster volume set gv_openstack_1 virt). The VM runs current (all packages updated) Ubuntu Xenial.

I set up the following fio job:

[job1]
ioengine=libaio
size=1g
loops=16
bs=512k
direct=1
filename=/tmp/fio.data2

When I run fio fio.job and reboot one of the data nodes, the IO statistics reported by fio drop to 0KB/0KB and 0 IOPS. After a while, the root filesystem gets remounted as read-only.

If you care about infrastructure, setup details etc., do...
2023 Mar 28
1
[PATCH v6 11/11] vhost: allow userspace to create workers
For vhost-scsi with 3 vqs and a workload that tries to use those vqs like:

fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
    --ioengine=libaio --iodepth=128 --numjobs=3

the single vhost worker thread becomes a bottleneck and we are stuck at around 500K IOPS no matter how many jobs, virtqueues, and CPUs are used. To better utilize virtqueues and available CPUs, this patch allows userspace to create workers and bind them to vqs...