Actually it seems I was wrong: vfs.zfs.top_maxinflight does not have
any influence, and this is confirmed by the nvmecontrol perftests.
nvmecontrol perftest -n 32 -o read -s 512 -t30 nvme0ns1
Threads: 32 Size: 512 READ Time: 30 IO/s: 270212 MB/s: 131
nvmecontrol perftest -n 32 -o write -s 512 -t30 nvme0ns1
Threads: 32 Size: 512 WRITE Time: 30 IO/s: 13658 MB/s: 6
I was able to recover from the errors in my previous message. They were
caused by the fact that I had commented out hw.nvme.per_cpu_io_queues=0
in loader.conf. After setting it back, things returned to the "slow
normal": performance is half of what it should be, and the issue seems
to be the nvme driver. I'll try it on another OS to confirm it's not a
hardware setting issue.
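For reference, this is the tunable in question as I have it now (a
minimal sketch of the relevant line, assuming the default
/boot/loader.conf location; it disables the driver's per-CPU I/O
queues):

# /boot/loader.conf
hw.nvme.per_cpu_io_queues=0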
Best regards,
Vintila Mihai Alexandru
On 1/16/2015 10:12 AM, Mihai Vintila wrote:
> I've tried to increase it and obtained around 90k iops for write, but
> now i only get :
>
> nvme0: WRITE sqid:6 cid:125 nsid:1 lba:3907028496 len:16
> nvme0: WRITE FAULTS (02/80) sqid:6 cid:125 cdw0:0
> nvme0: async event occurred (log page id=0x1)
> nvme0: WRITE sqid:6 cid:126 nsid:1 lba:528 len:16
> nvme0: WRITE FAULTS (02/80) sqid:6 cid:126 cdw0:0
> nvme0: async event occurred (log page id=0x1)
> nvme0: WRITE sqid:6 cid:127 nsid:1 lba:3907027984 len:16
> nvme0: WRITE FAULTS (02/80) sqid:6 cid:127 cdw0:0
> nvme0: async event occurred (log page id=0x1)
>
> Best regards,
> Vintila Mihai Alexandru
>
> On 1/15/2015 7:59 PM, Slawa Olhovchenkov wrote:
>> On Thu, Jan 15, 2015 at 07:22:49PM +0200, Mihai Vintila wrote:
>>
>>> /etc/sysctl.conf
>>>
>>> net.inet.tcp.fast_finwait2_recycle=1
>>> net.inet.ip.portrange.randomized=0
>>> net.inet.ip.portrange.first=1024
>>> net.inet.ip.portrange.last=65535
>>> net.inet.tcp.recvbuf_max=16777216
>>> net.inet.tcp.sendbuf_max=16777216
>>> net.inet.tcp.recvbuf_inc=65536
>>> vfs.zfs.vdev.trim_on_init=0
>>> #close time_wait connections at 2*7500ms
>>> net.inet.tcp.msl=7500
>>> kern.ipc.somaxconn=4096
>>> net.inet.icmp.icmplim=2000
>>>
>>> #zfs
>>> vfs.zfs.txg.timeout=5
>>> vfs.zfs.prefetch_disable=1
>>
>>> Any help to bring this device to proper speed will be welcomed.
>> Do you try to increase vfs.zfs.top_maxinflight?
>