Ray,
Here is the short list of performance metrics I track on our 7410 performance
rigs via 7000-series Analytics.
m:analytics datasets> ls
Datasets:
DATASET      STATE   INCORE  ONDISK  NAME
dataset-000  active   1016K   75.9M  arc.accesses[hit/miss]
dataset-001  active    390K   37.9M  arc.l2_accesses[hit/miss]
dataset-002  active    242K   13.7M  arc.l2_size
dataset-003  active    242K   13.7M  arc.size
dataset-004  active    958K   86.1M  arc.size[component]
dataset-005  active    242K   13.7M  cpu.utilization
dataset-006  active    477K   46.2M  cpu.utilization[mode]
dataset-007  active    648K   59.7M  dnlc.accesses[hit/miss]
dataset-008  active    242K   13.7M  fc.bytes
dataset-009  active    242K   13.7M  fc.ops
dataset-010  active    242K   12.8M  fc.ops[latency]
dataset-011  active    242K   12.8M  fc.ops[op]
dataset-012  active    242K   13.7M  ftp.kilobytes
dataset-013  active    242K   12.8M  ftp.kilobytes[op]
dataset-014  active    242K   13.7M  http.reqs
dataset-015  active    242K   12.8M  http.reqs[latency]
dataset-016  active    242K   12.8M  http.reqs[op]
dataset-017  active    242K   13.7M  io.bytes
dataset-018  active    439K   43.7M  io.bytes[op]
dataset-019  active    308K   29.6M  io.disks[utilization=95][disk]
dataset-020  active   2.93M   87.2M  io.disks[utilization]
dataset-021  active    242K   13.7M  io.ops
dataset-022  active   9.85M    274M  io.ops[disk]
dataset-023  active   20.0M    827M  io.ops[latency]
dataset-024  active    438K   43.6M  io.ops[op]
dataset-025  active    242K   13.7M  iscsi.bytes
dataset-026  active    242K   13.7M  iscsi.ops
dataset-027  active   1.45M   91.1M  iscsi.ops[latency]
dataset-028  active    248K   14.8M  iscsi.ops[op]
dataset-029  active    242K   13.7M  ndmp.diskkb
dataset-030  active    242K   13.8M  nfs2.ops
dataset-031  active    242K   12.8M  nfs2.ops[latency]
dataset-032  active    242K   13.8M  nfs2.ops[op]
dataset-033  active    242K   13.8M  nfs3.ops
dataset-034  active   8.82M    163M  nfs3.ops[latency]
dataset-035  active    327K   18.1M  nfs3.ops[op]
dataset-036  active    242K   13.8M  nfs4.ops
dataset-037  active   2.31M   97.8M  nfs4.ops[latency]
dataset-038  active    311K   17.2M  nfs4.ops[op]
dataset-039  active    242K   13.7M  nic.kilobytes
dataset-040  active    970K   84.5M  nic.kilobytes[device]
dataset-041  active    943K   77.1M  nic.kilobytes[direction=in][device]
dataset-042  active    457K   31.1M  nic.kilobytes[direction=out][device]
dataset-043  active    503K   49.1M  nic.kilobytes[direction]
dataset-044  active    242K   13.7M  sftp.kilobytes
dataset-045  active    242K   12.8M  sftp.kilobytes[op]
dataset-046  active    242K   13.7M  smb.ops
dataset-047  active    242K   12.8M  smb.ops[latency]
dataset-048  active    242K   13.7M  smb.ops[op]
dataset-049  active    242K   12.8M  srp.bytes
dataset-050  active    242K   12.8M  srp.ops[latency]
dataset-051  active    242K   12.8M  srp.ops[op]
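
If you want to scrape the archived data out, you can read it back from the
same CLI context. From memory, so treat the exact syntax as approximate:
select a dataset, then ask for the last N seconds of samples:

m:analytics datasets> select dataset-023
m:analytics dataset-023> read 5

That prints the most recent five one-second samples of io.ops[latency],
which is straightforward to post-process into whatever trending system
you use.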
Cheers,
Joel.
On 04/08/10 14:06, Ray Van Dolson wrote:
> We're starting to grow our ZFS environment and really need to start
> standardizing our monitoring procedures.
>
> OS tools are great for spot troubleshooting and sar can be used for
> some trending, but we'd really like to tie this into an SNMP-based
> system that can generate graphs for us (via RRD or other).
>
> Whether or not we do this via our standard enterprise monitoring tool
> or write some custom scripts I don't really care... but I do have the
> following questions:
>
> - What metrics are you guys tracking? I'm thinking:
> - IOPS
> - ZIL statistics
> - L2ARC hit ratio
> - Throughput
> - "IO Wait" (I know there's probably a better term here)
>
Use "Latency" instead of "IO Wait"; that is what the *.ops[latency]
breakdowns in the list above capture.

> - How do you gather this information? Some but not all is
> available via SNMP. Has anyone written a ZFS specific MIB or
> plugin to make the info available via the standard Solaris SNMP
> daemon? What information is available only via zdb/mdb?
>
On the 7000 appliances, this is easy via Analytics.
On Solaris, you need to pull the data from kstats and/or DTrace scripts
and then archive it in a similar manner yourself; a rough sketch follows
below...
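
Something along these lines, assuming the stock kstat(1M) utility with
parseable (-p) output and the usual zfs:0:arcstats counters (verify the
statistic names on your release). It prints ARC and L2ARC hit ratios once
a minute in a form that is easy to feed to rrdtool or a net-snmp
extend script:

#!/usr/bin/env python
# Rough sketch: poll ZFS ARC/L2ARC hit ratios from kstats for trending.
# Assumes Solaris kstat(1M) parseable output (-p) and the standard
# zfs:0:arcstats counters; check the exact names on your release.
import subprocess
import time

STATS = ["hits", "misses", "l2_hits", "l2_misses"]

def read_arcstats():
    """Return the current cumulative arcstats counters as a dict."""
    args = ["kstat", "-p"] + ["zfs:0:arcstats:%s" % s for s in STATS]
    vals = {}
    for line in subprocess.check_output(args).decode().splitlines():
        name, value = line.split()          # "zfs:0:arcstats:hits\t12345"
        vals[name.rsplit(":", 1)[-1]] = int(value)
    return vals

prev = read_arcstats()
while True:
    time.sleep(60)                          # one sample per minute
    cur = read_arcstats()
    delta = dict((k, cur[k] - prev[k]) for k in STATS)
    prev = cur
    arc = delta["hits"] + delta["misses"]
    l2 = delta["l2_hits"] + delta["l2_misses"]
    # One line per interval; trivial to feed to "rrdtool update" or to
    # expose through a net-snmp exec/extend script.
    print("arc_hit_pct=%.1f l2arc_hit_pct=%.1f" % (
        (100.0 * delta["hits"] / arc) if arc else 0.0,
        (100.0 * delta["l2_hits"] / l2) if l2 else 0.0))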
> - Anyone have any RRD-based setups for monitoring their ZFS
> environments they'd be willing to share or talk about?
>
> Thanks in advance,
> Ray
--
Joel Buckley | +1.303.272.5556
Oracle Open Storage Systems
500 Eldorado Blvd
Broomfield, CO 80021-3400