Displaying 20 results from an estimated 10000 matches similar to: "Faulted Pool Question"
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi,
We are seeing more long delays in zpool import, say, 4-5 or even
25-30 minutes, especially when backup jobs are running in the FC SAN
where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array,
some pools take a few seconds to import, but others take minutes. The pattern
seems random to me so far. It was first noticed soon after we upgraded to
Solaris 10 U6
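One commonly suggested way to take LUN probing out of the import path
is a per-pool cachefile; a minimal sketch with hypothetical pool and
path names (note that a clean zpool export removes the pool from its
cachefile, hence the copy taken while the pool is imported):
# zpool set cachefile=/etc/zfs/san.cache mypool
# cp /etc/zfs/san.cache /etc/zfs/san.cache.bak
...after a crash, or on a failover node:
# zpool import -c /etc/zfs/san.cache.bak mypool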
2006 Jun 21
2
ZFS and Virtualization
Hi experts,
I have a few questions about ZFS and virtualization:
[b]Virtualization and performance[/b]
When filesystem traffic occurs on a zpool containing only spindles dedicated to that zpool, I/O can be distributed evenly. When the zpool is located on a LUN sliced from a RAID group shared by multiple systems, the capability of doing I/O from this zpool will be limited. Avoiding or limiting I/O to
2008 Jul 25
18
zfs, raidz, spare and jbod
Hi.
I installed Solaris Express Developer Edition (b79) on a Supermicro
quad-core Harpertown E5405 with 8 GB RAM and two internal SATA drives.
I installed Solaris onto one of the internal drives. I added an Areca
ARC-1680 SAS controller and configured it in JBOD mode. I attached an
external SAS cabinet with 16 SAS drives of 1 TB (931 binary GB) each. I
created a raidz2 pool with ten disks and one spare.
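A sketch of that layout with hypothetical c-t-d device names standing
in for ten of the cabinet's drives plus the spare:
# zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 spare c2t10d0
# zpool status tank    (shows the raidz2 vdev and the spare)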
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact
with: is 700-800 IOPS reasonable for a 7200 RPM SATA drive (1 TB
Sun-badged Seagate ST31000N in a J4400)? I have a resilver running and am
seeing about 700-800 writes/sec on the hot spare as it resilvers.
There is no other I/O activity on this box, as this is a remote
replication target for production data. I have a the
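Two ways to watch the per-device rate while the resilver runs (pool
name hypothetical):
# zpool iostat -v tank 5   (per-vdev read/write ops, 5-second samples)
# iostat -xn 5             (Solaris per-device view; w/s is writes/sec)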
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
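The spare interfaces that eventually shipped look roughly like this
sketch (pool and device names hypothetical):
# zpool create tank mirror c1t0d0 c1t1d0 spare c1t2d0
# zpool add tank spare c1t3d0        (add a spare to an existing pool)
# zpool replace tank c1t0d0 c1t2d0   (manually swap a failed disk in)
# zpool remove tank c1t3d0           (unused spares can be removed)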
2009 Oct 09
22
Does ZFS work with SAN-attached devices?
Hi All,
It's been a while since I touched ZFS. Is the below still the case with ZFS and a hardware RAID array? Do we still need to provide two LUNs from the hardware RAID and then ZFS-mirror those two LUNs?
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
Thanks,
Shawn
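The FAQ's advice boils down to giving ZFS a layer of redundancy of its
own on top of the array, for example (LUN names hypothetical):
# zpool create tank mirror c3t0d0 c3t1d0   (two array LUNs, ZFS mirror)
With only a single hardware-RAID LUN, ZFS can detect corruption through
its checksums but has no second copy to repair from; the ZFS-level
mirror restores that self-healing ability.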
2007 Feb 13
1
Zpool complain about missing devices
Hello,
We had a situation at a customer site where one of the zpools complains about missing devices. We do not know which devices are missing. Here are the details:
The customer had a zpool created on a hardware RAID (SAN). There is no redundancy in the pool. The pool had 13 LUNs; the customer wanted to increase its size and added 5 more LUNs. During the zpool add process the system panicked with zfs
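One way to work out which devices the pool expects is to dump the vdev
labels from the LUNs that are still visible (device path hypothetical):
# zdb -l /dev/dsk/c4t0d0s0   (prints the vdev labels, including the
                              pool GUID and recorded device paths)
Comparing the guid/path pairs in the labels against what the system can
see should narrow down which LUNs went missing.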
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi,
some time ago I created a zpool of single vdevs, not using mirroring of any kind. Now I wonder if it's possible to add vdevs and mirror the currently existing ones.
Thanks,
budy
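The usual answer is yes: each existing single-disk vdev can be turned
into a mirror by attaching a second device to it, one vdev at a time.
A sketch with hypothetical device names:
# zpool status tank                  (note the name of each single vdev)
# zpool attach tank c1t0d0 c2t0d0    (c1t0d0 becomes a two-way mirror)
# zpool attach tank c1t1d0 c2t1d0    (repeat for each remaining vdev)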
2007 Jul 04
3
zfs dynamic lun expansion
Hi,
I had 2 LUNs in a ZFS mirrored config.
I increased the size of both LUNs by x GB and offlined/onlined the individual LUNs in the zpool; I also tried an export/import of the zpool, but I am unable to see the increased size. What would I need to do to see the increased size? Or is it not possible yet?
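On builds that have autoexpand support, the sequence is roughly as
follows (pool and device names hypothetical); on releases that predate
it, the common workaround was relabeling the LUN with format(1M) so the
label reflects the new size, then exporting and importing:
# zpool set autoexpand=on tank
# zpool online -e tank c1t0d0   (-e expands the vdev onto the grown LUN)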
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
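Worth noting: ZFS's dynamic striping is not classic RAID0. Each block
is written to a single top-level vdev rather than being split across
all n LUNs, so
# zpool create myPool lun-1 lun-2 lun-3
behaves more like a load-balanced concatenation than a block-level
stripe; there is no separate concat-only mode to ask for.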
2009 Mar 11
9
ZFS on a SAN
Hi All,
I'm new to ZFS, so I hope this isn't too basic a question. I have a host where I set up ZFS. The Oracle DBAs did their thing and I now have a number of ZFS datasets with their respective clones and snapshots on serverA. I want to export some of the clones to serverB. Do I need to zone serverB to see the same LUNs as serverA? Or does it have to have preexisting,
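If serverB does not need to see the same LUNs, one alternative is to
replicate the clones over the network with send/receive (dataset and
host names hypothetical):
# zfs snapshot pool/dbclone@xfer
# zfs send pool/dbclone@xfer | ssh serverB zfs recv tank/dbclone
Importing the pool itself on serverB would instead require zoning the
LUNs to serverB and exporting the pool from serverA first; a pool can
only be imported on one host at a time.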
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start:
Status immediately after starting resilver:
# zpool status
pool: rc-pool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine
2011 Oct 24
1
ZFS in front of MD3000i
We're setting up ZFS in front of an MD3000i (and attached MD1000
expansion trays).
The rule of thumb is to let ZFS manage all of the disks, so we wanted
to expose each MD3000i spindle via a JBOD mode of some sort.
Unfortunately, it doesn't look like the MD3000i supports this (though this[1]
post seems to reference an Enhanced JBOD mode....), so we decided to
create a whole bunch of
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What
would you do next to try and recover this zfs pool?
I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was
composed of 4 1.5 TiB disks. One disk is totally dead. Another had
SMART errors, but using GNU ddrescue I was able to copy all the data
off successfully.
I have copied all 3 remaining disks as images using
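With the surviving disks captured as image files, one line of attack
is to point an import at the directory holding them; some platforms
scan plain files in a -d directory directly, while others need the
images attached as block devices first (lofiadm on Solaris, mdconfig
on FreeBSD). Paths here are hypothetical:
# zpool import -d /recovery/images           (see what ZFS can find)
# zpool import -d /recovery/images -f bank0
On implementations that support it, adding -o readonly=on keeps the
attempt from writing to the images.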
2009 Aug 27
0
How are you supposed to remove faulted spares from pools?
We have a situation where all of the spares in a set of pools have
gone into a faulted state and now, apparently, we can't remove them
or otherwise de-fault them. I'm confident that the underlying disks
are fine, but ZFS seems quite unwilling to do anything with the spares
situation.
(The specific faulted state is 'FAULTED corrupted data' in
'zpool
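For reference, the documented way to remove a healthy spare is simply
(names hypothetical):
# zpool remove tank c1t5d0
so the puzzle here is what to do when that refuses to touch a spare in
the 'FAULTED corrupted data' state; exporting and re-importing the pool
is one commonly suggested thing to try.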
2008 Nov 25
2
Can a zpool cachefile be copied between systems?
Suppose that you have a SAN environment with a lot of LUNs. In the
normal course of events this means that 'zpool import' is very slow,
because it has to probe all of the LUNs all of the time.
In S10U6, the theoretical 'obvious' way to get around this for your
SAN filesystems seems to be to use a non-default cachefile (likely one
cachefile per virtual
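A sketch of that approach with hypothetical paths; the cachefile can
be named at pool creation or set later via the cachefile property:
# zpool create -o cachefile=/etc/zfs/san.cache tank c5t0d0 c5t1d0
# scp /etc/zfs/san.cache failover:/etc/zfs/san.cache
failover# zpool import -c /etc/zfs/san.cache -a   (no LUN probing)
This is meant for failover, not concurrent access; a pool must never
be imported on two nodes at once.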
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment
with the backend storage being iSCSI-based, in part because of the
possibilities for failover. In exploring things in our test environment,
I have noticed that 'zpool import' takes a fairly long time; about
35 to 45 seconds per pool. A pool import time this slow obviously
has implications for how fast
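One workaround is to narrow the directory that zpool import probes,
for instance a directory of symlinks to just the LUNs that belong to
the pool (device names hypothetical):
# mkdir /dev/pool1devs
# ln -s /dev/dsk/c4t0d0s0 /dev/dsk/c4t1d0s0 /dev/pool1devs/
# zpool import -d /dev/pool1devs pool1   (probes only the linked devices)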
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS:
ZFS filesystem version 4
ZFS storage pool version 15
Yesterday my machine running FreeBSD 8.2-RELENG shut down with an 'ad4
detached' error while I was copying a big file...
and after the reboot two WD Green 1 TB drives said goodbye. One of them
died, and the other has ZFS errors:
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path=
offset=187921768448 size=512 error=6
2008 Jan 31
7
mounting a copy of a zfs pool /file system while orginal is still active
Hello Sun gurus. I do not know if this is supported. I have created a zpool consisting of SAN resources and created a ZFS file system. Using third-party software, I have taken snapshots of all LUNs in the ZFS pool. My question is: in a recovery situation, is there a way for me to mount the snapshots and import the pool while the original is still active? Right now all I am able to do is export
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows some weird config of an old zpool
bash-3.00# zpool import
pool: data1
id: 7539031628606861598
state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-3C
config:
data1 UNAVAIL insufficient replicas
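Where the newer zpool labelclear subcommand exists, the stale labels
behind a ghost entry like this can be wiped per device once you are
certain nothing else uses them (device name hypothetical, and this is
destructive):
# zpool labelclear -f /dev/dsk/c6t0d0s0
On releases without labelclear, the usual answer was to overwrite the
label areas at the start and end of the device by hand, which is easy
to get wrong; triple-check the device name first.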