similar to: How are you supposed to remove faulted spares from pools?

Displaying 20 results from an estimated 6000 matches similar to: "How are you supposed to remove faulted spares from pools?"
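
For the question in the subject line, a minimal sketch of the usual sequence follows; 'tank' and the device names are placeholders, and the exact steps depend on whether the faulted spare is merely listed as a spare or is actively attached to a vdev:

  # If the faulted spare is only sitting in the spares list:
  zpool remove tank c2t3d0

  # If the spare has been pulled in to replace a failed disk, detach it
  # first (this returns it to the spares list), then remove it:
  zpool detach tank c2t3d0
  zpool remove tank c2t3d0

  # Confirm it is gone:
  zpool status tank
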

2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment with the backend storage being iSCSI-based, in part because of the possibilities for failover. In exploring things in our test environment, I have noticed that 'zpool import' takes a fairly long time; about 35 to 45 seconds per pool. A pool import time this slow obviously has implications for how fast
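
One hedged sketch of a workaround people use for slow imports on large SAN/iSCSI configurations: point 'zpool import -d' at a directory containing links to only the relevant devices, so every LUN on the system is not probed. Paths and device names below are placeholders:

  mkdir -p /var/run/tank-devs
  ln -s /dev/dsk/c4t0d0s0 /var/run/tank-devs/   # link only this pool's LUNs
  ln -s /dev/dsk/c4t1d0s0 /var/run/tank-devs/
  time zpool import -d /var/run/tank-devs tank
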
2009 Nov 20
0
hung pool on iscsi
Hi, Can anyone identify whether this is a known issue (perhaps 6667208) and if the fix is going to be pushed out to Solaris 10 anytime soon? I'm getting badly beaten up over this weekly, essentially anytime we drop a packet between our twenty-odd iscsi-backed zones and the filer. Chris was kind enough to provide his synopsis here (thanks Chris):
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi, We are seeing more long delays in zpool import, say, 4~5 or even 25~30 minutes, especially when backup jobs are running in the FC SAN where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array, some pools take a few seconds to import while others take minutes. The pattern seems random to me so far. It was first noticed soon after being upgraded to Solaris 10 U6
2011 May 19
2
Faulted Pool Question
I just got a call from another of our admins, as I am the resident ZFS expert, and they have opened a support case with Oracle, but I figured I'd ask here as well, as this forum often provides better, faster answers :-) We have a server (M4000) with 6 FC attached SE-3511 disk arrays (some behind a 6920 DSP engine). There are many LUNs, all about 500 GB and mirrored via ZFS. The LUNs
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
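
For reference, the hot-spare interfaces that eventually shipped look roughly like this (device names are placeholders):

  # create a pool with a hot spare
  zpool create tank mirror c1t0d0 c1t1d0 spare c1t2d0

  # add or remove spares later
  zpool add tank spare c1t3d0
  zpool remove tank c1t3d0

  # manually activate a spare in place of a failing disk
  zpool replace tank c1t0d0 c1t2d0
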
2008 Nov 25
2
Can a zpool cachefile be copied between systems?
Suppose that you have a SAN environment with a lot of LUNs. In the normal course of events this means that 'zpool import' is very slow, because it has to probe all of the LUNs all of the time. In S10U6, the theoretical 'obvious' way to get around this for your SAN filesystems seems to be to use a non-default cachefile (likely one cachefile per virtual
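
A minimal sketch of the non-default cachefile approach on S10U6 and later; the host name and paths are placeholders, and copying the file across only helps if the device paths are the same on both systems:

  # on the host that currently has the pool
  zpool set cachefile=/etc/zfs/tank.cache tank
  scp /etc/zfs/tank.cache failover-host:/etc/zfs/tank.cache

  # on the failover host: import from the cachefile instead of probing every LUN
  zpool import -c /etc/zfs/tank.cache -a
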
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it was in hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated. ------------------- EMAIL ------------------- List of faulty resources:
2009 Feb 05
0
zpool import
Even though zfs has the snapshot and send/recv options for backup/replication, we were testing metro-mirror using zfs on a U6 system with a duplicate set of IBM SVC LUNs. I had the SAN backend sync done while zpool ZA was exported on system A. Import of pool ZA was flawless and used the same set of LUNs. zpool ignored the duplicate copy of pool ZA existing on another
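
When two copies of the same pool are visible, 'zpool import' with no arguments lists each importable pool with a numeric id, and a specific copy can be imported by that id, optionally under a new name. The id below is made up:

  zpool import                                # lists pools with name, id, and state
  zpool import 6245266279379275906 ZA_copy    # import one specific copy by id, renamed
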
2008 Apr 04
10
ZFS and multipath with iSCSI
We're currently designing a ZFS fileserver environment with iSCSI based storage (for failover, cost, ease of expansion, and so on). As part of this we would like to use multipathing for extra reliability, and I am not sure how we want to configure it. Our iSCSI backend only supports multiple sessions per target, not multiple connections per session (and my understanding is that the
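
On Solaris, one common arrangement is to let MPxIO/scsi_vhci group the paths that result from logging in to the same target via two portals. A rough sketch with placeholder portal addresses; check stmsboot(1M) and iscsiadm(1M) for the exact options on your release:

  stmsboot -e                                       # enable MPxIO (may require a reboot)
  iscsiadm add discovery-address 192.168.10.5:3260  # first portal
  iscsiadm add discovery-address 192.168.20.5:3260  # second portal, separate network path
  iscsiadm modify discovery -t enable               # enable sendtargets discovery
  mpathadm list lu                                  # verify both paths appear under one logical unit
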
2010 Mar 02
4
ZFS Large scale deployment model
We have a virtualized environment of T-Series servers where each host has either zones or LDoms. All of the virtual systems will have their own dedicated storage on ZFS (and some may also get raw LUNs). All the SAN storage is delivered in fixed sized 33GB LUNs. The question I have to the community is whether it would be better to have a pool per virtual system, or create a large pool and carve out ZFS
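
If the single-big-pool route is taken, the carving-out part is just per-guest datasets with quotas/reservations, delegated to a zone or exported as a zvol for an LDom. Names and sizes below are placeholders:

  zpool create tank c4t0d0 c4t1d0 c4t2d0                    # the 33GB SAN LUNs

  # a zone gets a delegated filesystem capped at one LUN's worth of space
  zfs create -o quota=33g -o reservation=33g tank/zone1
  zonecfg -z zone1 'add dataset; set name=tank/zone1; end'

  # an LDom gets a zvol to use as a virtual disk backend
  zfs create -V 33g tank/ldom1
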
2006 Nov 01
0
RAID-Z1 pool became faulted when a disk was removed.
So I have attached to my system two 7-disk SCSI arrays, each of 18.2 GB disks. Each of them is a RAID-Z1 zpool. I had a disk I thought was a dud, so I pulled the fifth disk in my array and put the dud in. Sure enough, Solaris started spitting errors like there was no tomorrow in dmesg, and wouldn't use the disk. Ah well. Remove it, put the original back in - hey, Solaris still thinks
2008 Sep 16
1
Interesting Pool Import Failure
Hello... Since there has been much discussion about zpool import failures resulting in loss of an entire pool, I thought I would illustrate a scenario I just went through to recover a faulted pool that wouldn't import under Solaris 10 U5. While this is a simple scenario, and the data was not terribly important, I think the exercise should at least give some peace of mind to those who
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
Hi, more than a year ago I created a mirrored ZFS pool consisting of 2x1TB HDDs using the OSX 10.5 ZFS Kernel Extension (Zpool Version 8, ZFS Version 2). Everything went fine and I used the pool to store personal stuff on it, like lots of photos and music. (So getting the data back is not time critical, but still important to me.) Later, since the development of the ZFS extension was
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our zfs storage server. We have 2 pools: 'stor' is a raidz out of 7 iscsi nodes, and 'home' is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (on the zfs level) we upgraded our NAS head from opensolaris b57
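
To see whether the pools and the software actually agree on versions (a half-finished upgrade usually shows up here), a first check might look like the following, using the pool names mentioned above:

  zpool upgrade -v             # pool versions this zfs release supports
  zpool upgrade                # pools still at an older on-disk version
  zpool get version stor home
  zpool upgrade stor           # upgrade one pool explicitly (not reversible)
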
2010 Oct 05
0
Long import due to spares.
Just for history as to why Fishworks was running on this box...we were in the beta program and have upgraded along the way. This box is an X4240 with 16x 146GB disks running the Feb 2010 release of FW with de-dupe. We were getting ready to re-purpose the box and getting our data off. We then deleted a filesystem that was using de-duplication and the box suddenly went into a freeze and the pool
2010 Jun 09
1
Multipath pools - practical for use?
Hi all, Writing up some Fedora documentation, and looking to figure out the best way of mapping multipath network(!) LUNs to pools in libvirt, i.e. InfiniBand SRP LUNs, though it would probably apply equally well to Fibre Channel. There are two approaches I can think of easily: a) Large LUNs (i.e. TB+) that are mapped to a host server as disk, with each LUN being configured as an LVM
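
As a point of comparison with the LVM approach in (a), libvirt also has an 'mpath' storage pool type that simply exposes the dm-multipath devices under /dev/mapper as volumes. A rough sketch, with an arbitrary pool name:

  virsh pool-define-as mpaths mpath --target /dev/mapper
  virsh pool-start mpaths
  virsh pool-autostart mpaths
  virsh vol-list mpaths        # each multipath device appears as a volume
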
2010 Dec 05
4
Zfs ignoring spares?
Hi all I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad; one of them had a bunch of errors, the other was very slow. After zpool offlining these and then zpool replacing them with online spares, resilver ended and I thought it'd be ok. Apparently not. Although the resilver succeeds, the pool status
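
For completeness, the way a spare normally gets resolved after a resilver is with zpool detach; which device you detach decides whether the spare becomes the permanent replacement or goes back to the spares list. Device names below are placeholders:

  zpool status -v tank          # the spare shows as INUSE next to the old disk

  # make the spare the permanent replacement:
  zpool detach tank c5t3d0      # detach the old/faulted disk

  # ...or give the spare back and keep waiting for a real replacement:
  zpool detach tank c9t1d0      # detach the spare itself
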
2009 Mar 26
0
warm spares on a 4540
Hi, In the doc: http://docs.sun.com/source/819-4359-15/CH3-maint.html#50495545_22785 of the 4540 it mentions you can have warm spares. Is there any zfs setting that labels a drive as a warm spare? Or does this not matter if you use the zpool autoreplace property? Thanks in advance, ~~sa -- ---------------- Shannon A. Fiume System Administrator shannon dot fiume at sun dot com
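
As far as I know there is no property that marks a disk as a "warm spare" as such; the distinction is usually just whether the disk is added to the pool's spare list or left unconfigured in the chassis, with autoreplace covering the same-slot replacement case. A small sketch with placeholder names:

  zpool add tank spare c4t7d0      # hot spare: ZFS can pull it in automatically
  zpool set autoreplace=on tank    # a new disk in the same slot as a failed one is used automatically
  zpool get autoreplace tank
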
2009 Nov 02
2
How do I protect my zfs pools?
Hi, I may have lost my first zpool, due to ... well, we're not yet sure. The 'zpool import tank' causes a panic -- one which I'm not even able to capture via savecore. I'm glad this happened when it did. At home I am in the process of moving all my data from a Linux NFS server to OpenSolaris. It's something I'd been meaning to do
2008 Mar 12
5
[Bug 752] New: zfs set keysource no longer works on existing pools
http://defect.opensolaris.org/bz/show_bug.cgi?id=752
Summary: zfs set keysource no longer works on existing pools
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: blocker
Priority: P1
Component: other
AssignedTo: