LEES, Cooper
2010-Nov-16  22:01 UTC
[zfs-discuss] Adding Sun Flash Accelerator F20's into a Zpool for Optimal Performance [SEC=UNCLASSIFIED]
Zfs Gods,
I have been approved to buy 2 x F20 PCIe cards for my x4540 to increase our
IOPS, and I was wondering what configuration would give the most benefit in
extra IOPS (both reading and writing) on my zpool.
Currently I have the following storage zpool, called cesspool:
  pool: cesspool
 state: ONLINE
 scrub: scrub completed after 14h0m with 0 errors on Sat Nov 13 18:11:29 2010
config:
        NAME           STATE     READ WRITE CKSUM
        cesspool       ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            c10t0d0p0  ONLINE       0     0     0
            c11t0d0p0  ONLINE       0     0     0
            c12t0d0p0  ONLINE       0     0     0
            c13t0d0p0  ONLINE       0     0     0
            c8t1d0p0   ONLINE       0     0     0
            c9t1d0p0   ONLINE       0     0     0
            c10t1d0p0  ONLINE       0     0     0
            c11t1d0p0  ONLINE       0     0     0
            c12t1d0p0  ONLINE       0     0     0
            c13t1d0p0  ONLINE       0     0     0
            c8t2d0p0   ONLINE       0     0     0
          raidz2-1     ONLINE       0     0     0
            c9t2d0p0   ONLINE       0     0     0
            c10t2d0p0  ONLINE       0     0     0
            c11t2d0p0  ONLINE       0     0     0
            c12t2d0p0  ONLINE       0     0     0
            c13t2d0p0  ONLINE       0     0     0
            c8t3d0p0   ONLINE       0     0     0
            c9t3d0p0   ONLINE       0     0     0
            c10t3d0p0  ONLINE       0     0     0
            c11t3d0p0  ONLINE       0     0     0
            c12t3d0p0  ONLINE       0     0     0
            c13t3d0p0  ONLINE       0     0     0
          raidz2-2     ONLINE       0     0     0
            c8t4d0p0   ONLINE       0     0     0
            c9t4d0p0   ONLINE       0     0     0
            c10t4d0p0  ONLINE       0     0     0
            c11t4d0p0  ONLINE       0     0     0
            c12t4d0p0  ONLINE       0     0     0
            c13t4d0p0  ONLINE       0     0     0
            c8t5d0p0   ONLINE       0     0     0
            c9t5d0p0   ONLINE       0     0     0
            c10t5d0p0  ONLINE       0     0     0
            c11t5d0p0  ONLINE       0     0     0
            c12t5d0p0  ONLINE       0     0     0
          raidz2-3     ONLINE       0     0     0
            c13t5d0p0  ONLINE       0     0     0
            c8t6d0p0   ONLINE       0     0     0
            c9t6d0p0   ONLINE       0     0     0
            c10t6d0p0  ONLINE       0     0     0
            c11t6d0p0  ONLINE       0     0     0
            c12t7d0p0  ONLINE       0     0     0
            c13t6d0p0  ONLINE       0     0     0
            c8t7d0p0   ONLINE       0     0     0
            c9t7d0p0   ONLINE       0     0     0
            c10t7d0p0  ONLINE       0     0     0
            c11t7d0p0  ONLINE       0     0     0
        spares
          c12t6d0p0    AVAIL
          c13t7d0p0    AVAIL
As you would imagine with that setup, its IOPS are nothing to write home
about. No slog or cache devices. If I get 2 x F20 PCIe cards, how would
you recommend I use them for most benefit? I was thinking of partitioning
the two drives that show up from the F20s, attaching a mirrored pair of
partitions to cesspool as a slog, and adding the other two partitions as
cache devices. Or am I better off using one device solely for the slog and
one as a cache device in my pool (cesspool)? Because if I lose the cache
device, the pool still operates, just slows back down. (Am I correct
there?)
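In zpool syntax, the two layouts I am weighing would look something like
the following. The device names c2t0d0 and c3t0d0 are just placeholders
for whatever the F20 modules enumerate as on the x4540, and the s0/s1
slices assume I partition each device first:

  # Option 1: partition both F20 devices, mirror the slog across the
  # two cards, and add the remaining slices as cache devices
  # (device names below are placeholders)
  zpool add cesspool log mirror c2t0d0s0 c3t0d0s0
  zpool add cesspool cache c2t0d0s1 c3t0d0s1

  # Option 2: whole devices -- one solely for the slog, one for cache
  zpool add cesspool log c2t0d0
  zpool add cesspool cache c3t0d0

(As I understand it, the slog is attached to the existing pool as a log
vdev rather than being a separate pool, and cache devices can be removed
again later with zpool remove.)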
I am getting an outage on the prod system in December, but I will test them
in my backup x4500 (if I can) before the cutover on the prod system. I will
also be looking to go to the latest firmware and possibly (depending on
costs - awaiting a quote from our Sun/Oracle supplier) Solaris 11 Express ...
Thanks - I would appreciate any thoughts.
--
Cooper Ry Lees
HPC / UNIX Systems Administrator - Information Management Services (IMS)
Australian Nuclear Science and Technology Organisation
T  +61 2 9717 3853
F  +61 2 9717 9273
M  +61 403 739 446
E  cooper.lees at ansto.gov.au
www.ansto.gov.au
Bob Friesenhahn
2010-Nov-17  01:35 UTC
[zfs-discuss] Adding Sun Flash Accelerator F20's into a Zpool for Optimal Performance [SEC=UNCLASSIFIED]
On Wed, 17 Nov 2010, LEES, Cooper wrote:
> Zfs Gods,
>
> I have been approved to buy 2 x F20 PCIe cards for my x4540 to
> increase our IOPS, and I was wondering what configuration would give
> the most benefit in extra IOPS (both reading and writing) on my zpool.

To clarify, adding a dedicated intent log (slog) only improves apparent
IOPS for synchronous writes, such as those issued via NFS or by a
database. It will not help async writes at all unless they are contending
with sync writes.

A l2arc device will help with read IOPS quite a lot, provided that the
working set is larger than system RAM yet smaller than the l2arc device.
If the working set is still much larger than RAM plus the l2arc devices,
then read performance may still be bottlenecked by the disks.

Take care not to trade an IOPS gain for a loss in data-rate throughput.
Sometimes cache devices offer less throughput than the main store.

There is little doubt that your pool would support more IOPS if it were
built from more vdevs containing fewer drives each.

I doubt that anyone here can adequately answer your question without
measurement data taken from the system while it is under the expected
load. Useful tools for producing data to look at are the zilstat.ksh and
arc_summary.pl scripts, which you should find mentioned in the list
archives.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
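For reference, gathering the measurements Bob describes might look
something like the following sketch. zpool iostat is a standard tool; the
zilstat.ksh and arc_summary.pl invocations assume the versions circulated
on this list and may differ:

  # Per-vdev activity for the pool, sampled every 5 seconds under load
  zpool iostat -v cesspool 5

  # ZIL (synchronous write) activity -- zilstat.ksh from the list archives
  ./zilstat.ksh 5

  # ARC hit/miss statistics, to estimate how much a l2arc would help
  ./arc_summary.pl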