Hi
I have installed Nexenta Community Edition as a virtual storage appliance in my
home lab: an HP ML110 G5 running VMware ESXi 4.1, with 2x 1TB SATA disks and a
60GB SSD. All three disks are configured as datastores that hold the VM disks.
The server also hosts a Solaris 11 Express VM with two virtual disks - one on a
SATA datastore, the other on the SSD datastore - each formatted with ZFS, but
with no additional L2ARC or ZIL devices. Running the bonnie++ benchmark on the
SATA disk gave the following results:
* Sequential Block Reads: 69199K/sec (67.5MB/sec)
* Sequential Block Writes: 64105K/sec (62.6MB/sec)
* Rewrite: 31138K/sec (30.4MB/sec)
* Random Seeks: 246/sec
Running the same bonnie++ test on the SSD disk gave:
* Sequential Block Reads: 183511K/sec (179MB/sec)
* Sequential Block Writes: 109536K/sec (106MB/sec)
* Rewrite: 75712K/sec (73MB/sec)
* Random Seeks: 2450/sec
The bonnie++ test operates on a 4GB data set, which is larger than the RAM in
either VM (to prevent falsely high results from the data being cached in
memory).
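
For reference, the bonnie++ invocation was along these lines (the test
directory and user are placeholders for my actual values):

  # Run bonnie++ with a 4GB working set (-s takes a size in MB),
  # deliberately larger than the VM's RAM.
  # -d: directory to test in; -u: user to run the test as.
  bonnie++ -d /pool/test -s 4096 -u nobody
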
I created a SATA-based, mirrored share on the Nexenta VM and mounted it on the
Solaris 11 Express VM over NFSv3 (rough setup steps are sketched after the
numbers below). Performance is significantly lower:
* Sequential Block Reads: 55699K/sec (54MB/sec)
* Sequential Block Writes: 24167K/sec (23MB/sec)
* Rewrite: 15669K/sec (15MB/sec)
* Random Seeks: 285.4/sec
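
The setup steps were roughly as follows (pool, filesystem, and device names and
the mount point are placeholders for what I actually used; Nexenta's
management interface wraps the same operations):

  # On the Nexenta VM: build a mirrored pool from the two SATA vdisks
  # and share a filesystem over NFS.
  zpool create tank mirror c1t1d0 c1t2d0
  zfs create tank/share
  zfs set sharenfs=on tank/share

  # On the Solaris 11 Express VM: mount the share over NFSv3.
  mount -F nfs -o vers=3 nexenta:/tank/share /mnt/test
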
I expected NFS (and the rest of the network stack) to add some overhead, but
the drop in writes from 62.6MB/sec to 23MB/sec seems dramatic, and rewrite
performance is halved.
Thinking I might be able to get better read performance, I added a 20GB SSD
volume to the Nexenta VM as an L2ARC device and re-ran the test:
* Sequential Block Reads: 52301K/sec (51MB/sec)
* Sequential Block Writes: 24483K/sec (23MB/sec)
* Rewrite: 16278K/sec (15MB/sec)
* Random Seeks: 647/sec
Random seeks went up, but sequential block reads were unchanged.
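
(For reference, the cache device was added with something like the following;
the device name is a placeholder.)

  # Add the 20GB SSD vdisk to the pool as an L2ARC (cache) device.
  zpool add tank cache c1t3d0
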
Thinking that NFS might be waiting on filesystem syncs, I added a 2GB SSD
volume to the Nexenta VM as a ZIL (separate log) device and re-ran the test:
* Sequential Block Reads: 57477K/sec (56MB/sec)
* Sequential Block Writes: 22592K/sec (22MB/sec)
* Rewrite: 13862K/sec (13MB/sec)
* Random Seeks: 639/sec
No significant benefit from using either an L2ARC or a ZIL.
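
(Again for reference, the log device was added with something like the
following; the device name is a placeholder.)

  # Add the 2GB SSD vdisk to the pool as a separate log (ZIL) device.
  zpool add tank log c1t4d0
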
So, my questions:
1) Can anything be done to tune the NFS stack to improve its performance?
2) Are there ways to see if the L2ARC or ZIL are being utilised (and how
effectively)? (See the note just below.)
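
On question 2: zpool iostat will at least break out I/O per vdev - including
the cache and log devices - but it doesn't show hit rates, hence the question:

  # Per-vdev I/O statistics for the pool, sampled every 5 seconds.
  zpool iostat -v tank 5
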
The Solaris 11 Express and Nexenta CE VMs are both connected to the same
internal vSwitch, so they should communicate at "bus speed" (i.e., faster than
1Gbit). I've run a network test between the two VMs (using the ttcp utility)
and get ~145MB/sec of throughput.
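
The ttcp test was along these lines (the host name is a placeholder; -s makes
ttcp source/sink a generated pattern rather than using stdin/stdout):

  # Receiver, on one VM:
  ttcp -r -s

  # Transmitter, on the other VM:
  ttcp -t -s nexenta
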
Any help or pointers appreciated!
JR
Christopher George
2011-Jan-11 15:35 UTC
[zfs-discuss] ZFS/NFS benchmarking - is this normal?
> So, my questions:
> ...
> 2) Are there ways to see if the L2ARC or ZIL are being utilised (and
> how effectively)?

Richard Elling has an excellent DTrace script (zilstat) that answers exactly
how much activity (synchronous writes) the ZIL encounters. See:
http://www.richardelling.com/Home/scripts-and-programs-1/zilstat

Best regards,

Christopher George
Founder/CTO
www.ddrdrive.com
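
For reference, a typical zilstat invocation (assuming the script has been
saved locally as zilstat.ksh and is run with DTrace privileges; it takes
optional interval and count arguments) looks something like this:

  # Report ZIL activity (synchronous write bytes and ops)
  # once per second, for ten samples.
  ./zilstat.ksh 1 10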