Displaying 4 results from an estimated 4 matches for "optan".
2018 Aug 08
1
Windows Guest I/O performance issues (already using virtio)
...FI booting the windows guest and defining “<blockio logical_block_size='4096' physical_block_size='4096'/>”? Perhaps better block size consistency through to the zvol?
Here is my setup:
48-core Haswell CPU
192G RAM
Linux 4.14.61 or 4.9.114 (testing both)
ZFS file system on Optane SSD drive or ZFS file system on dumb HBA with 8 spindles of 15k disks (testing both)
4k block size zvol for virtual machines
32G ARC cache
Here is my VM:
<domain type='kvm' id='12'>
<name>testvm</name>
<memory unit='KiB'>33554432</memory>...
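The question above is whether advertising 4K sectors to the guest via blockio, on top of a 4K-volblocksize zvol, keeps block sizes consistent end to end. A minimal sketch of that pairing follows; the pool, volume, and device names are hypothetical and not taken from the post, and cache='none' / io='native' are common choices for zvol-backed guests rather than settings confirmed in the thread:

  # 4K-block zvol backing the guest (hypothetical pool/volume names)
  zfs create -V 100G -o volblocksize=4K tank/vm/testvm

  <!-- libvirt disk stanza advertising 4K logical/physical sectors -->
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <source dev='/dev/zvol/tank/vm/testvm'/>
    <blockio logical_block_size='4096' physical_block_size='4096'/>
    <target dev='vda' bus='virtio'/>
  </disk>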
2017 Oct 13
1
small files performance
...s writes on Gluster.
> >> The read performance has been improved in many ways in recent releases
> >> (md-cache, parallel-readdir, hot-tier).
> >> But write performance is more or less the same and you cannot go above
> >> 10K smallfiles create - even with SSD or Optane drives.
> >> Even ramdisk is not helping much here, because the bottleneck is not
> >> in the storage performance.
> >> Key problems I've noticed:
> >> - LOOKUPs are expensive, because there is separate query for every
> >> depth level of destinatio...
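For context on the read-side improvements named above (md-cache, parallel-readdir), a hedged sketch of the usual volume options is shown here; the volume name and timeout values are placeholders and option availability depends on the Gluster release. As the poster notes, these mainly help lookups and reads, not small-file creates:

  gluster volume set myvol performance.parallel-readdir on
  gluster volume set myvol performance.stat-prefetch on
  gluster volume set myvol performance.md-cache-timeout 600
  gluster volume set myvol features.cache-invalidation on
  gluster volume set myvol features.cache-invalidation-timeout 600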
2018 Aug 09
0
Re: Windows Guest I/O performance issues (already using virtio) (Matt Schumacher)
...defining “<blockio logical_block_size='4096' physical_block_size='4096'/>”? Perhaps better block size consistency through to the zvol?
>
>
>Here is my setup:
>
>48-core Haswell CPU
>192G RAM
>Linux 4.14.61 or 4.9.114 (testing both)
>ZFS file system on Optane SSD drive or ZFS file system on dumb HBA with 8 spindles of 15k disks (testing both)
>4k block size zvol for virtual machines
>32G ARC cache
>
>Here is my VM:
>
><domain type='kvm' id='12'>
> <name>testvm</name>
> <memory unit='KiB...
2017 Sep 08
3
cyrus spool on btrfs?
On 09/08/2017 01:31 PM, hw wrote:
> Mark Haney wrote:
>
> I/O is not heavy in that sense, that's why I said that's not the
> application.
> There is I/O which, as tests have shown, benefits greatly from low
> latency, which
> is where the idea to use SSDs for the relevant data has arisen from.
> This I/O
> only involves a small amount of data and is not sustained