Hi all,
I've got a:
10.1-RELEASE FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC
2014 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
hw.machine: amd64
hw.model: Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz
hw.ncpu: 12
hw.machine_arch: amd64
With two Intel P3600 NVMe drives:
nvmecontrol devlist
nvme0: INTEL SSDPE2ME020T4
nvme0ns1 (1907729MB)
nvme1: INTEL SSDPE2ME020T4
nvme1ns1 (1907729MB)
I've put them in either a mirror or a single-drive ZFS pool. The issue I'm
having is that performance on the P3600 drives is very poor; they end up
really close to an S3500, which is not what I expected.
Here is an iozone benchmark on a single-drive pool with recordsize set to
4K and compression set to lz4 (all values below are operations per second,
because of -O):
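For reference, the test dataset was prepared roughly along these lines
(pool and device names are placeholders, not the exact ones I used):
zpool create tank /dev/nvd0
zfs set recordsize=4K tank
zfs set compression=lz4 tank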
Command line used: iozone -Rb /root/output.wks -O -i 0 -i 1 -i 2
-e -+n -r4K -r 8K -r 32K -r 64K -r 128K -s 1G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
      KB  reclen   write  rewrite    read  reread  random read  random write
 1048576       4   74609        0  104268       0        95699         49975
 1048576       8   36554        0   62927       0        59419         25778
 1048576      32    9869        0   19148       0        18757          7134
 1048576      64    5014        0    9612       0         9528          3813
 1048576     128    2586        0    4908       0         4883          1962
While on the S3500 I get:
Command line used: iozone -Rb /root/output_nexenta.wks -O -i 0
-i 1 -i 2 -e -+n -r4K -r 8K -r 32K -r 64K -r 128K -s 1G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
      KB  reclen   write  rewrite    read  reread  random read  random write
 1048576       4   66215        0  204121       0       162069         35408
 1048576       8   54523        0  168679       0       137393         29711
 1048576      32   10293        0   80652       0        75063         10462
 1048576      64   23065        0   49044       0        46684         20179
 1048576     128   16755        0   25715       0        25240         16125
Settings I have changed from the defaults:
cat /boot/loader.conf
zfs_load="YES"
kern.geom.label.gptid.enable="0"
nvme_load="YES"
nvd_load="YES"
vfs.zfs.vdev.trim_on_init=0
vfs.zfs.trim.enabled=0
kern.ipc.nmbjumbo16=262144
kern.ipc.nmbjumbo9=262144
kern.ipc.nmbclusters=262144
kern.ipc.nmbjumbop=262144
net.inet.tcp.maxtcptw=163840
hw.intr_storm_threshold="9000"
vfs.zfs.cache_flush_disable=1  # avoid sending flushes to prevent useless delays on buggy low-end SSDs
vfs.zfs.vdev.cache.bshift=13
net.inet.tcp.tcbhashsize=32768
vfs.zfs.arc_max=34359738368
/etc/sysctl.conf
net.inet.tcp.fast_finwait2_recycle=1
net.inet.ip.portrange.randomized=0
net.inet.ip.portrange.first=1024
net.inet.ip.portrange.last=65535
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_inc=65536
vfs.zfs.vdev.trim_on_init=0
#close time_wait connections at 2*7500ms
net.inet.tcp.msl=7500
kern.ipc.somaxconn=4096
net.inet.icmp.icmplim=2000
#zfs
vfs.zfs.txg.timeout=5
vfs.zfs.prefetch_disable=1
What I've tried is setting hw.nvme.per_cpu_io_queues to 0, but it doesn't
seem to have any effect. Setting hw.nvme.force_intx=1 leads to the system
not booting at all.
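Both were set as loader tunables in /boot/loader.conf (one at a time), i.e.
roughly:
hw.nvme.per_cpu_io_queues=0    # no measurable difference
hw.nvme.force_intx=1           # system does not boot with this set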
From what I can tell, the issue is in the nvme driver itself, as perftest
shows:
nvmecontrol perftest -n 32 -o read -s 512 -t30 nvme0ns1
Threads: 32 Size: 512 READ Time: 30 IO/s: 270212 MB/s: 131
nvmecontrol perftest -n 32 -o write -s 512 -t30 nvme0ns1
Threads: 32 Size: 512 WRITE Time: 30 IO/s: 13658 MB/s: 6
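For the write case that works out to about 13658 * 512 B ≈ 7 MB/s, matching
the report above, versus roughly 131 MB/s on the read side, so it is
specifically the write path that looks far too slow for this class of drive.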
Any help with bringing these drives up to proper speed would be welcome.
--
Best regards,
Vintila Mihai Alexandru