Displaying 8 results from an estimated 8 matches for "randrepeat".
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you.
With performance.write-behind-trickling-writes ON (default):
## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=17...
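For anyone reproducing the ON/OFF comparison: the option named above can be flipped with the stock gluster CLI before re-running the same fio job. A minimal sketch, assuming a volume called gv0 (placeholder name, not from the original post):

# Toggle the translator option, confirm it, and re-run the 4k randwrite job
# ("gv0" is a placeholder volume name)
gluster volume set gv0 performance.write-behind-trickling-writes off
gluster volume get gv0 performance.write-behind-trickling-writes
fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test \
    --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite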
2018 Mar 20
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod <mailinglists at smcleod.net>
wrote:
> Hi Raghavendra,
>
>
> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa <rgowdapp at redhat.com>
> wrote:
>
> Aggregating a large number of small writes by write-behind into large writes
> has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
>
>
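Not part of the quoted exchange, but for context: the write-behind tunables in effect on a volume can be listed with the stock CLI. A minimal sketch, with gv0 as a placeholder volume name:

# Show the write-behind related options currently applied to the volume
gluster volume get gv0 all | grep -i write-behind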
2013 Jan 21
1
btrfs_start_delalloc_inodes livelocks when creating snapshot under IO
...me details about my setup:
I am testing Chris's for-linus branch
I have one subvolume with 8 large files (10GB each).
I am running two fio processes (one per file, so only 2 out of 8 files
are involved) with 8 threads each like this:
fio --thread --directory=/btrfs/subvol1 --rw=randwrite --randrepeat=1
--fadvise_hint=0 --fallocate=posix --size=1000m --filesize=10737418240
--bsrange=512b-64k --scramble_buffers=1 --nrfiles=1 --overwrite=1
--ioengine=sync --filename=file-1 --name=job0 --name=job1 --name=job2
--name=job3 --name=job4 --name=job5 --name=job6 --name=job7
The files are preallocated wit...
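The livelock in the subject line is hit while a snapshot of the busy subvolume is taken during this fio run. A minimal sketch of that step (the snapshot destination path is an assumption, not from the original post):

# While the fio jobs above are still writing, snapshot the busy subvolume
# (destination path is a placeholder)
btrfs subvolume snapshot /btrfs/subvol1 /btrfs/snap-under-io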
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...entOS1
And install everything on the single vda disc with LVM (I use defaults
in anaconda, but remove the large /home to prevent the SSD being overused).
After install and reboot, log in to the VM and
yum install epel-release -y && yum install screen fio htop -y
and then run disk test:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
--name=test *--filename=test* --bs=4k --iodepth=64 --size=4G
--readwrite=randrw --rwmixread=75
then *keep repeating* but *change the filename* attribute so it does not
use the same blocks over and over again.
In the beginning the performance is gr...
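A minimal shell sketch of the "keep repeating but change the filename" step described above (iteration count and file naming are arbitrary assumptions):

# Re-run the fio job, pointing each pass at a fresh file so new blocks get written
for i in $(seq 1 10); do
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=test --filename=test-$i --bs=4k --iodepth=64 --size=4G \
        --readwrite=randrw --rwmixread=75
done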
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
...ngle vda disc with LVM (I use defaults in
> anaconda, but remove the large /home to prevent the SSD being overused).
>
> After install and reboot log in to VM and
>
> yum install epel-release -y && yum install screen fio htop -y
>
> and then run disk test:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test *--filename=test* --bs=4k --iodepth=64 --size=4G
> --readwrite=randrw --rwmixread=75
>
> then *keep repeating* but *change the filename* attribute so it does not
> use the same blocks over and over again.
>
> In the b...
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...> defaults in anaconda, but remove the large /home to prevent SSD
> being overused).
>
> After install and reboot log in to VM and
>
> yum install epel-release -y && yum install screen fio htop -y
>
> and then run disk test:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test *--filename=test* --bs=4k --iodepth=64 --size=4G
> --readwrite=randrw --rwmixread=75
>
> then *keep repeating* but *change the filename* attribute so it
> does not use the same blocks over and over again.
>...
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai,
I wrote vhost-nvme patches on top of Christoph's NVMe target.
vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe
driver. But the tests I have done didn't show competitive performance
compared to virtio-blk/virtio-scsi. The bottleneck is in MMIO. Your NVMe
vendor extension patches greatly reduce the number of MMIO writes.
So I'd like to push it
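For comparing the in-guest block paths discussed here, the same style of fio job used elsewhere in these threads can be pointed at the emulated device. A sketch only; the device path and job parameters are assumptions, not from the original post:

# Read-only random benchmark against the guest's NVMe device
# (/dev/nvme0n1 and all parameters are placeholders)
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=nvmetest \
    --filename=/dev/nvme0n1 --bs=4k --iodepth=32 --readwrite=randread \
    --runtime=60 --time_based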