Displaying 10 results from an estimated 10 matches for "gtod_reduce".
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you.
With performance.write-behind-trickling-writes ON (default):
## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=17.3MiB/s][r=0,w=4422 IOPS][eta 00m:...
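For comparison, the same 4k randwrite job can be rerun with trickling writes turned off; a minimal sketch, assuming a replica volume named gv0 (the volume name is hypothetical) and that the option is exposed through volume set on this GlusterFS release:
# gluster volume set gv0 performance.write-behind-trickling-writes off
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
Re-enabling it afterwards is the same volume set command with "on".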
2018 Mar 20
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod <mailinglists at smcleod.net>
wrote:
> Hi Raghavendra,
>
>
> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa <rgowdapp at redhat.com>
> wrote:
>
> Aggregating large number of small writes by write-behind into large writes
> has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
>
>
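Once the write-behind aggregation change lands in a release, the relevant translator settings can be inspected per volume; a hedged sketch, again using a hypothetical volume name gv0:
# gluster volume get gv0 performance.write-behind
# gluster volume get gv0 performance.write-behind-trickling-writes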
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
...ice nvme,drive=D22,serial=1234
Here are the test results:
local NVMe: 860MB/s
qemu-nvme: 108MB/s
qemu-nvme+google-ext: 140MB/s
qemu-nvme-google-ext+eventfd: 190MB/s
root@wheezy:~# cat test.job
[global]
bs=4k
ioengine=libaio
iodepth=64
direct=1
runtime=60
time_based
norandommap
group_reporting
gtod_reduce=1
numjobs=8
[job1]
filename=/dev/nvme0n1
rw=read
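The job file above is run directly against the raw device; a brief usage sketch (read-only here, since rw=read):
root@wheezy:~# fio test.job
gtod_reduce=1 drops most of fio's latency accounting to cut down on gettimeofday() calls, so the output concentrates on bandwidth and IOPS rather than latency percentiles.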
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...vda disk with LVM (I use defaults
in anaconda, but remove the large /home to prevent the SSD being overused).
After install and reboot, log in to the VM and
yum install epel-release -y && yum install screen fio htop -y
and then run disk test:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
--name=test *--filename=test* --bs=4k --iodepth=64 --size=4G
--readwrite=randrw --rwmixread=75
then *keep repeating* but *change the filename* attribute so it does not
use the same blocks over and over again.
In the beginning the performance is great!! Wow, very impressive 150MB/s
4k random...
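A minimal sketch of the repeat-with-a-new-filename step described above, assuming a bash shell inside the guest and enough free space for each 4G test file (the loop count of 10 is arbitrary):
for i in $(seq 1 10); do
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=test --filename=test$i --bs=4k --iodepth=64 --size=4G \
        --readwrite=randrw --rwmixread=75
done
Each pass writes to a fresh file (test1, test2, ...), so the workload keeps hitting previously unused blocks instead of rewriting the same extents.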
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>> }
>>>
>>> start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> - cq->head = new_head;
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
...t; anaconda, but remove the large /home to prevent the SSD being overused).
>
> After install and reboot, log in to the VM and
>
> yum install epel-release -y && yum install screen fio htop -y
>
> and then run disk test:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test *--filename=test* --bs=4k --iodepth=64 --size=4G
> --readwrite=randrw --rwmixread=75
>
> then *keep repeating* but *change the filename* attribute so it does not
> use the same blocks over and over again.
>
> In the beginning the performance is great!! Wow, very...
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...he large /home to prevent the SSD
> being overused).
>
> After install and reboot, log in to the VM and
>
> yum install epel-release -y && yum install screen fio htop -y
>
> and then run disk test:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
> --name=test *--filename=test* --bs=4k --iodepth=64 --size=4G
> --readwrite=randrw --rwmixread=75
>
> then *keep repeating* but *change the filename* attribute so it
> does not use the same blocks over and over again.
>
> In the beginning the performance...
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
...ce.
qemu-nvme: 29MB/s
qemu-nvme+google-ext: 100MB/s
virtio-blk: 174MB/s
virtio-scsi: 118MB/s
I'll show you the qemu-vhost-nvme+google-ext numbers later.
root@guest:~# cat test.job
[global]
bs=4k
ioengine=libaio
iodepth=64
direct=1
runtime=120
time_based
rw=randread
norandommap
group_reporting
gtod_reduce=1
numjobs=2
[job1]
filename=/dev/nvme0n1
#filename=/dev/vdb
#filename=/dev/sda
rw=read
Patches also available at:
kernel:
https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-google-ext
qemu:
http://www.minggr.net/cgit/cgit.cgi/qemu/log/?h=nvme-google-ext
Thanks,
Ming
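A note on the job file above: options set in a job section override [global], so rw=read in [job1] takes precedence over the global rw=randread and the run is a sequential read. The commented-out filename lines are presumably how the same job was pointed at the virtio-blk (/dev/vdb) and virtio-scsi (/dev/sda) disks for the comparison numbers; uncommenting one of them (and commenting the NVMe line) before rerunning is enough:
root@guest:~# fio test.job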