Displaying 8 results from an estimated 8 matches for "siebenmann".
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment
with the backend storage being iSCSI-based, in part because of the
possibilities for failover. In exploring things in our test environment,
I have noticed that ''zpool import'' takes a fairly long time; about
35 to 45 seconds per pool. A pool import time this slow obviously
has implications for how fast
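A quick way to quantify the import time described above, as a minimal sketch; the pool name `tank` is an assumption, not from the thread:

```shell
# Export the pool, then time how long a re-import takes.
zpool export tank
time zpool import tank
```

Most of the time typically goes into probing devices; `zpool import -d <dir>`, pointed at a directory containing only the relevant device links, can narrow the search.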
2008 Apr 04
10
ZFS and multipath with iSCSI
We're currently designing a ZFS fileserver environment with iSCSI-based
storage (for failover, cost, ease of expansion, and so on). As part of
this we would like to use multipathing for extra reliability, and I am
not sure how we want to configure it.
Our iSCSI backend only supports multiple sessions per target, not
multiple connections per session (and my understanding is that the
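On Solaris, the multiple-sessions-per-target case is typically handled by MPxIO. A minimal sketch of enabling it and inspecting the resulting paths (this is a general illustration, not the configuration the poster settled on):

```shell
# Enable Solaris I/O multipathing (MPxIO); requires a reboot to take effect.
stmsboot -e
# After reboot, list multipathed logical units and their operational path counts.
mpathadm list lu
```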
2009 Nov 20
0
hung pool on iscsi
...eantime. Is someone out
there handy enough with the undocumented stuff to recommend a zdb
command or something that will pound the delinquent pool into submission
without crashing everything? Surely there's a pool hard-reset command
somewhere for the QA guys, right?
thx
jake
Chris Siebenmann wrote:
> You write:
> | Now I'd asked about this some months ago, but didn't get an answer so
> | forgive me for asking again: What's the difference between wait and
> | continue in my scenario? Will this allow the one faulted pool to fully
> | fail and...
2008 Jul 31
9
Terrible zfs performance under NFS load
Hello,
We have an S10U5 server sharing NFS shares via ZFS. While using the NFS mount as a log destination for syslog for 20 or so busy mail servers, we have noticed that the throughput quickly becomes severely degraded. I have tried disabling the ZIL and turning off cache flushing, and I have not seen any changes in performance. The servers are only pushing about 1MB/s of constant
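For reference, the usual way to disable the ZIL on Solaris 10 of that era was the `zil_disable` tunable in /etc/system; a minimal sketch (not recommended for production, since it sacrifices synchronous write semantics):

```shell
# Append the ZIL-disable tunable to /etc/system; takes effect on next reboot.
echo 'set zfs:zil_disable = 1' >> /etc/system
```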
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
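The NFS variant the author describes, as a minimal sketch; `pool/filesystem` is the placeholder name from the excerpt:

```shell
# Share a ZFS filesystem over NFS instead of CIFS.
zfs set sharenfs=on pool/filesystem
# Verify the property took effect.
zfs get sharenfs pool/filesystem
```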
2008 Jun 05
0
Tracking down the causes of a mysteriously shrinking ARC cache?
I have a test Solaris machine with 8 GB of memory. When freshly booted,
the ARC consumes 5 GB (and I would be happy to make it consume more)
and file-level prefetching works great even when I hit the machine with
a lot of simultaneous sequential reads. But overnight, the ARC has
shrunk to 2 GB (as reported by arcstat.pl) and file-level prefetching
is (as expected at that level) absolutely
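The ARC size reported by arcstat.pl comes from kernel kstats, which can also be read directly; a minimal sketch for watching the shrinkage described above:

```shell
# Current ARC size and its configured floor/ceiling, in bytes.
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_min
kstat -p zfs:0:arcstats:c_max
```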
2009 Aug 27
0
How are you supposed to remove faulted spares from pools?
We have a situation where all of the spares in a set of pools have
gone into a faulted state and now, apparently, we can't remove them
or otherwise de-fault them. I'm confident that the underlying disks
are fine, but ZFS seems quite unwilling to do anything with the spares
situation.
(The specific faulted state is 'FAULTED corrupted data' in
'zpool
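For reference, these are the standard commands for detaching or clearing a spare; whether they work on a spare in the 'FAULTED corrupted data' state is exactly what the poster is asking (pool and device names are illustrative):

```shell
# Remove a hot spare from a pool; c1t2d0 is an illustrative device name.
zpool remove tank c1t2d0
# If the spare is merely faulted rather than corrupted, clearing the
# error state may bring it back.
zpool clear tank c1t2d0
```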
2008 Nov 25
2
Can a zpool cachefile be copied between systems?
Suppose that you have a SAN environment with a lot of LUNs. In the
normal course of events this means that 'zpool import' is very slow,
because it has to probe all of the LUNs all of the time.
In S10U6, the theoretical 'obvious' way to get around this for your
SAN filesystems seems to be to use a non-default cachefile (likely one
cachefile per virtual
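The non-default-cachefile approach sketched above would look roughly like this; the pool name, device, and cachefile path are assumptions for illustration:

```shell
# Create (or import) the pool with a dedicated cachefile.
zpool create -o cachefile=/etc/zfs/san-pool.cache sanpool c2t0d0
# On the failover system, import via the copied cachefile,
# skipping the slow probe of every LUN.
zpool import -c /etc/zfs/san-pool.cache sanpool
```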