2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest
recommended version and am seeing radically different performance
when testing with iozone than I did in February of 2008. I am using
Solaris 10 U5 with all the latest patches.
This is the performance achieved (on a 32GB file) in February last
year:
KB reclen write rewrite read reread
33554432
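(For reference, a minimal sketch of the kind of iozone invocation that produces a table with exactly those columns; the actual flags, record size, and file path used in the original test are not shown in the snippet and are assumptions here:)
  # sequential write/rewrite (-i 0) and read/reread (-i 1) on a 32GB file
  # record size and path are guesses, not the poster's actual settings
  iozone -i 0 -i 1 -s 32g -r 128k -f /pool/iozone.tmp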
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss]
We are occasionally seeing extremely long completion times for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using an SSD as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
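(For readers unfamiliar with the setup described here: NFS COMMIT forces the server to put data on stable storage before replying, which is exactly the load a dedicated ZIL (slog) device is meant to absorb. A minimal sketch, with hypothetical pool and device names:)
  # dedicate an SSD as a separate log (slog) device
  zpool add tank log c2t5d0
  # watch per-vdev traffic to see whether the slog is keeping up
  zpool iostat -v tank 1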
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology ...
I'm actually speaking of hardware :)
ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed, it should be able to handle a lot of disks.
I want to
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron.
Zpool scrub runs fine from the command line, no errors.
The freeze happens within 30 seconds of the zpool scrub starting.
The one core dump I succeeded in taking showed the ARC cache eating up
all the RAM.
The server's running Solaris 10 u3, kernel patch 127727-11, but it's
been patched and seems to have
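(The kind of crontab entry in question, as a sketch; the pool name and schedule are assumptions:)
  # root's crontab: scrub the pool every Sunday at 02:00
  0 2 * * 0 /usr/sbin/zpool scrub tank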
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to
verify the integrity of that datastream without doing a "zfs receive" and
occupying all that disk space?
I am aware that "zfs send" is not a backup solution, due to vulnerability to
even a single bit error, a lack of granularity, and other reasons.
However ... there is an attraction to "zfs send" as an augmentation to the
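(One commonly suggested workaround, sketched below with hypothetical paths: capture a digest while the stream is being written, then verify the saved file against it later, with no zfs receive and no pool space needed. Later OpenSolaris builds also ship zstreamdump, which parses a stream and checks its internal checksums.)
  # save the stream and a digest of it in one pass
  zfs send tank/data@snap | tee /backup/data.zfs | \
      digest -a sha256 > /backup/data.zfs.sha256
  # later: recompute and compare, without receiving anything
  digest -a sha256 < /backup/data.zfs | diff - /backup/data.zfs.sha256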
2008 Sep 10
7
Intel M-series SSD
Interesting flash technology overview and SSD review here:
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
and another review here:
http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html
Regards,
--
Al Hopper, Logical Approach Inc, Plano, TX al at logical-approach.com
Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005
2010 Apr 10
41
Secure delete?
Hi all
Is it possible to securely delete a file from a zfs dataset/zpool once it's been snapshotted, meaning "delete (and perhaps overwrite) all copies of this file"?
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. That is an elementary imperative
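(The crux of the question above: blocks captured by a snapshot stay referenced until every snapshot, and any clone, holding them is destroyed, so a "secure delete" has to reach them all. A rough sketch with hypothetical names:)
  # which snapshots still reference the dataset?
  zfs list -t snapshot -r tank/home
  # deleting the file frees nothing while snapshots hold its blocks
  rm /tank/home/secret.doc
  zfs destroy tank/home@2010-04-01
  # note: even after the blocks are freed, copy-on-write ZFS does not
  # overwrite them in place, so this is deletion, not scrubbing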
2009 Jul 24
6
When writing to SLOG at full speed all disk IO is blocked
Hello all...
I'm seeing this behaviour in an old build (89), and I just want to hear from you whether there is some known bug about it. I'm aware of the "picket fencing" problem, and that ZFS is not choosing correctly whether writing to the slog is better or not (considering whether we get better throughput from the disks).
But I did not find anything about 100% slog activity (~115MB/s) blocks
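(A sketch of how one would confirm that behaviour, assuming a pool named tank: compare per-vdev traffic against OS-level per-disk service times while the stall is happening.)
  # per-vdev view: is the ~115MB/s all landing on the log device
  # while the data disks sit idle?
  zpool iostat -v tank 1
  # per-disk busy percentages and service times for the same interval
  iostat -xn 1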
2008 May 21
11
Per-user home filesystems and OS-X Leopard anomaly
I encountered an issue that people using OS-X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encountered most often by ZFS users since ZFS makes it easy to support
and export per-user filesystems. The problem I encountered was when
using ZFS to create exported per-user filesystems and the OS-X
automounter to perform the necessary mount magic.
OS-X
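(The server side of the arrangement being described, as a minimal sketch; pool, paths, and share options are assumptions:)
  # one filesystem per user, shared over NFS; children inherit sharenfs
  zfs create -o sharenfs=rw tank/home
  zfs create tank/home/alice
  # an OS-X client would then automount server:/tank/home/alice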
2008 Apr 28
5
ZFS - Implementation Successes and Failures
Hi
Firstly apologies for the spam if you got this email via multiple aliases.
I'm trying to document a number of common scenarios where ZFS is used as
part of the solution such as email server, $homeserver, RDBMS and so forth
but taken from real implementations where things worked and equally
importantly threw up things that needed to be avoided (even if that was the
whole of ZFS!).
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that if I "zfs set checksum=<different>" to change the algorithm that this will change the checksum algorithm for all FUTURE data blocks written, but does not in any way change the checksum for previously written data blocks.
I need to corroborate this understanding. Could someone please point me to a document that states this? I have searched and searched
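(That understanding matches how ZFS properties generally behave: they affect newly written blocks only. A small illustration, with hypothetical names:)
  # blocks written from this point on are checksummed with sha256
  zfs set checksum=sha256 tank/data
  # pre-existing blocks keep their original checksum until rewritten,
  # e.g. by copying the data
  cp /tank/data/old.bin /tank/data/old.bin.new
  mv /tank/data/old.bin.new /tank/data/old.bin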
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello,
We have a new Thor here with 24TB of disk in (first of many, hopefully).
We are trying to determine the best practices with respect to file system
management and sizing. Previously, we have tried to keep each file system
to a max size of 500GB to make sure we could fit it all on a single tape,
and to minimise restore times and impact should we experience some kind of
volume
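(One way to express such a per-filesystem cap, sketched with hypothetical names; the 500GB figure comes from the tape-size constraint above:)
  # cap each filesystem at one tape's worth of data
  zfs create -o quota=500G tank/projects/fs01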
2009 Dec 08
1
Live Upgrade Solaris 10 UFS to ZFS boot pre-requisites?
I have a Solaris 10 U5 system massively patched so that it supports
ZFS pool version 15 (similar to U8, kernel Generic_141445-09), live
upgrade components have been updated to Solaris 10 U8 versions from
the DVD, and GRUB has been updated to support redundant menus across
the UFS boot environments.
I have studied the Solaris 10 Live Upgrade manual (821-0438) and am
unable to find any
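(For context, the usual UFS-to-ZFS migration path with Live Upgrade looks roughly like the sketch below; pool, slice, and BE names are assumptions, and the pre-requisites the poster is asking about are exactly what this glosses over:)
  # create the root pool on a labeled slice, then migrate the BE into it
  zpool create rpool c0t0d0s0
  lucreate -n zfsBE -p rpool
  luactivate zfsBE
  init 6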
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduces its life span?
Hi,
I don't know if it's already been discussed here, but while
thinking about using the OCZ Vertex 2 Pro SSD (which according
to its spec page has supercaps built in) as a shared slog and L2ARC
device, it struck me that this might not be such a good idea.
Because this SSD is MLC based, write cycles are an issue here,
though I can't find any numbers in their spec.
Why do I
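(The sharing being considered is normally done by slicing the SSD and handing one slice to each role, roughly as below; device names are hypothetical. Worth noting when reasoning about wear: L2ARC fills at a deliberately throttled rate, so on a shared device the slog writes usually dominate the write-cycle budget.)
  # slice 0 as the slog, slice 1 as L2ARC
  zpool add tank log c3t0d0s0
  zpool add tank cache c3t0d0s1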
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1.
I want to add "phase 2", which is another 7x1.5TB raidz1.
Can I add the second phase to the first phase and basically have two
raid5's striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
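(The command lines being asked for are roughly these, with hypothetical device names:)
  # append a second 7-disk raidz1 vdev; ZFS stripes writes across both
  zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
  # bring the on-disk format up to the running release
  zpool upgrade tank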
2009 Jun 15
33
compression at zfs filesystem creation
Hi,
I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump?
Thanks,
~~sa
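(A sketch of the behaviour under discussion, with hypothetical names. The rpool/swap and rpool/dump cases are the usual exceptions: compressing swap rarely pays off, and a dump device bypasses the normal ZFS I/O pipeline, so compression would not apply to it anyway.)
  # enable at creation so every block is compressed from the start
  zfs create -o compression=on tank/data
  # or set it on an existing filesystem; only new writes are compressed
  zfs set compression=on tank/existing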
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB).
It seems to work really well as a ZIL, performance-wise. My question is: how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss, or is it pool loss?
Also, does the fact that I have a UPS matter?
The numbers I'm seeing are really nice... these are some NFS tar times
before
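(On the pool-loss half of the question, the answer changed with the pool version; as a sketch with hypothetical names: from pool version 19 onward a dead slog can be removed and the pool carries on, whereas older pools could become unimportable after losing the log device.)
  # check which pool versions and capabilities this release supports
  zpool upgrade -v
  # on version >= 19 pools, a failed slog can simply be removed
  zpool remove tank c4t0d0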
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list,
as this matter pops up every now and then in posts on this list, I just
want to clarify that the real performance of RaidZ (in its current
implementation) does NOT follow from raidz-style data-efficient
redundancy or the copy-on-write design used in ZFS.
In an M-way mirrored setup of N disks you get the write performance of
the worst disk and a read performance that is
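(A rough, hedged model of the scaling the post is driving at, for N disks with M-way mirroring; it ignores caching, channel, and controller limits:)
  write_bw(mirror pool)  ~ (N/M) x bw(slowest disk)  # N/M top-level vdevs
  read_iops(mirror pool) ~ N x iops(one disk)        # any copy can serve a read
  read_iops(raidz vdev)  ~ iops(one disk)            # each read touches the full stripe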
2009 Jan 09
24
zfs root, jumpstart and flash archives
I understand that currently, at least under Solaris 10u6, it is not
possible to jumpstart a new system with a zfs root using a flash archive
as a source.
Can anyone comment on whether this restriction will be lifted in the near
term, or if it is a while out (6+ months) before this will be possible?
Thanks,
Jerry
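(For contrast with the flash-archive restriction: a plain initial-install JumpStart onto a ZFS root does work on 10u6, with a profile roughly like the sketch below; pool, BE, and device names are assumptions.)
  # JumpStart profile: initial install onto a mirrored ZFS root
  install_type initial_install
  pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
  bootenv installbe bename s10u6BE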