Displaying 20 results from an estimated 5000 matches similar to: "Finding Pool ID"
2010 Jul 16
6
Lost zpool after reboot
Hello,
I have a dual boot with Windows 7 64-bit Enterprise Edition and OpenSolaris build 134. This is on a Sun Ultra 40 M1 workstation. Three hard drives: two in a ZFS mirror, one shared with Windows.
The last two days I was working in Windows. I didn't touch the hard drives in any way, except that I once opened Disk Management to figure out why an external USB hard drive was not being listed.
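A reasonable first step in a situation like this (a sketch, not taken from the thread; the pool name "tank" is a placeholder) is to ask ZFS what it can still find on the attached disks:

zpool import                       # scan /dev/dsk for importable pools and report their state
zpool import -d /dev/dsk           # point the scan explicitly at a device directory
zpool import -f tank               # force the import if the pool looks like it was not cleanly exported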
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and iostat -xn show lots of idle disk time, no
above-average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3 GHz cores, 32 GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
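For reference, these are the kinds of commands the poster is describing (a sketch; "tank" is a placeholder pool name):

zpool status tank                  # the scrub line reports progress and the current scan rate
zpool iostat -v tank 5             # per-vdev bandwidth and IOPS, sampled every 5 seconds
iostat -xn 5                       # per-device service times and %busy on Solaris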
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the block size of a particular file. I know the
block size of a file is decided at creation time, as a function
of the write sizes done and the recordsize property of the dataset.
How can I access that information? Some zdb magic?
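One way to answer this (a sketch based on common zdb usage, not a reply from the thread; the pool, path, and object number are placeholders): on ZFS the inode number reported by ls -i is the object number, and zdb can dump that object's dnode, whose dblk field is the block size actually in use.

ls -i /tank/data/file.bin          # the inode number is the ZFS object number, e.g. 12345
zdb -ddddd tank/data 12345         # dump the object; the "dblk" column shows its block size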
2009 Oct 23
7
cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME UUID SUNW-MSG-ID
Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
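A common way to map such a cryptic vdev reference back to a real device (a sketch, not taken from the thread; the pool name "tank" is a placeholder) is to pull the vdev GUID out of the error telemetry and match it against the pool configuration:

fmdump -eV -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006   # verbose ereports include the vdev_guid
zdb -C tank                                          # the cached config lists each child vdev's guid and path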
2007 Aug 14
2
restore lost pool after vtoc re-label
Hi all,
I've been using a SAN LUN as the sole member of a zpool with one additional ZFS filesystem. This is a flat SAN fabric, so this LUN was available to other systems on the fabric, and one of them came up with "wrong magic number" for several drives; as best I can tell, the VTOC for my zpool LUN was over-written on that host via format labeling to correct the error.
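ZFS keeps four copies of its label on each member device (two at the front, two at the end), so if only the VTOC at the front of the LUN was rewritten, the trailing labels may survive. A sketch of how to check (the device path is a placeholder):

zdb -l /dev/rdsk/c2t0d0s0          # dump all four labels; intact ones show the pool name, guid and config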
2006 Mar 10
3
pool space reservation
What is a use case of setting a reservation on the base pool object?
Say I have a pool of 3 100GB drives dynamically striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used?
Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2
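For context, a reservation on the pool's root dataset is set through the ordinary property mechanism (a sketch; "tank" is a placeholder pool name):

zfs set reservation=200G tank      # set a 200 GB reservation on the pool's root dataset
zfs get reservation tank           # inspect the current value
zfs set reservation=none tank      # clear it again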
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on the SAN), so that when a pool switches
over to the other node ZFS would pick up that node's local disk drives as
L2ARC.
To clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
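The closest approximation today (a sketch, not a proposal from the thread; pool and device names are placeholders) relies on cache devices being freely addable and removable, so a failover script could drop the old node's SSDs and add the new node's after the pool moves:

zpool add tank cache c2t0d0 c2t1d0    # attach the local SSDs as L2ARC after import on this node
zpool remove tank c2t0d0 c2t1d0       # cache devices, unlike data vdevs, can be removed again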
2010 Mar 29
2
pool won't import
root@cs6:~# zpool import
pool: content3
id: 14184872052409584084
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-72
config:
content3
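On builds that already have the pool-recovery import option, the usual sequence for a pool in this state is roughly (a sketch, not advice from the thread):

zpool import -f content3           # try the forced import first, as the status message suggests
zpool import -nF content3          # dry run: report whether discarding the last few transactions would allow import
zpool import -F content3           # actually roll back to the last consistent transaction group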
2012 Dec 12
20
Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)
I've hit this bug on four of my Solaris 11 servers. Looking for anyone else
who has seen it, as well as comments/speculation on cause.
This bug is pretty bad. If you are lucky you can import the pool read-only
and migrate it elsewhere.
I've also tried setting zfs:zfs_recover=1,aok=1 with varying results.
http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc
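For reference, the tunables mentioned above go into /etc/system, and a read-only import is what lets you copy the data off (a sketch; "tank" is a placeholder pool name):

set zfs:zfs_recover = 1            # add to /etc/system: tolerate some metadata damage
set aok = 1                        # add to /etc/system: downgrade fatal assertions to warnings
zpool import -o readonly=on tank   # after a reboot, import read-only so nothing new is written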
2007 Sep 13
11
How do I get my pool back?
After having to replace an internal RAID card in an X2200 (S10U3 in
this case), I can see the disks just fine, and can boot, so the data
isn't completely missing.
However, my zpool has gone.
# zpool status -x
pool: storage
state: FAULTED
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the
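When a controller swap changes device paths, re-scanning the devices is often enough to bring the pool back (a sketch, not the resolution from the thread):

zpool export storage               # if the pool is imported at all; clears the stale configuration
zpool import -d /dev/dsk storage   # re-scan the device tree and pick up the disks at their new paths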
2013 Jan 08
3
pool metadata has duplicate children
I seem to have managed to end up with a pool that is confused about its child disks. The pool is faulted with corrupt metadata:
pool: d
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
see: http://illumos.org/msg/ZFS-8000-72
scan: none requested
config:
NAME STATE
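When the cached configuration and the on-disk labels disagree like this, comparing the labels on each member disk is usually the first diagnostic step (a sketch; the device path is a placeholder):

zdb -l /dev/rdsk/c0t1d0s0          # dump the four labels; each contains the vdev tree this disk believes in
zdb -e -C d                        # read the config for the non-imported pool "d" straight from the devices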
2007 Dec 31
4
Help! ZFS pool is UNAVAILABLE
Hi All,
I posted this in a different thread, but it was recommended that I post in this one.
Basically, I have a 3-drive raidz array on internal Seagate drives, running build 64nv. I purchased 3 additional USB drives with the intention of mirroring and then migrating the data to the new USB drives.
I accidentally added the 3 USB drives in a raidz to my original storage pool, so now I have 2
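The root of the problem is the difference between adding and attaching (a sketch of the two commands with placeholder device names, not text from the thread): zpool add creates a new top-level vdev, which on these releases cannot be removed again, while zpool attach is what turns an existing device into a mirror.

zpool add tank raidz c2t0d0 c2t1d0 c2t2d0     # grows the pool with a new raidz top-level vdev (irreversible here)
zpool attach tank c0t0d0 c2t0d0               # what mirroring onto a new disk actually looks like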
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file
server on it for learning purposes, and I moved almost all of my data
to it. Yesterday, and naturally after no longer having backups of the
data in the server, I had a controller failure (SiS 180 (oh, the
quality)) and the HDD was considered unplugged. When I noticed a few
checksum failures on `zpool status` (including two on
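Once the controller is stable again, the generic follow-up for transient checksum errors is roughly (a sketch; "tank" is a placeholder pool name):

zpool clear tank                   # clear the error counters and any pending faulted state
zpool scrub tank                   # re-read everything; with a single disk this can only detect, not repair, damage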
2008 Jan 18
7
how to relocate a disk
Hi,
I'd like to move a disk from one controller to another. This disk is
part of a mirror in a zfs pool. How can one do this without having to
export/import the pool or reboot the system?
I tried taking it offline and online again, but then zpool says the disk
is unavailable. Trying a zpool replace didn't work because it complains
that the "new" disk is part of a
2010 Aug 30
5
pool died during scrub
I have a bunch of Sol10U8 boxes with ZFS pools, almost all raidz2 8-disk
stripes. They're all Supermicro-based with retail LSI cards.
I've noticed a tendency for things to go a little bonkers during the
weekly scrub (they all scrub over the weekend), and that's when I'll
lose a disk here and there. OK, fine, that's sort of the point, and
they're
2010 Jun 02
11
ZFS recovery tools
Hi,
I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to
learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks
to some great forum posts from Victor Latushkin; however, without his posts I would still be crying
at night...
I think the worst example is the zdb man page, which does little more than ask you
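For what it's worth, the handful of invocations that come up again and again in recovery threads are roughly these (a sketch; pool and device names are placeholders):

zdb -l /dev/rdsk/c0t0d0s0          # dump the on-disk labels: pool name, guid and vdev tree
zpool import -nF tank              # dry run of the rewind-based recovery import
zpool import -F tank               # discard the last few transaction groups to reach a consistent state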
2007 Sep 13
2
zpool versioning
Hi,
I was wondering if anyone would know if this is just an accounting-type
error with the recorded "version=" stored on disk, or if there
are (or could be) any deeper issues with an "upgraded" zpool?
I created a pool under a Sol10_x86_u3 install (11/06?), and zdb correctly
reported the pool as a "version=3" pool. I reinstalled the OS with a u4
(08/07), ran zpool
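Two easy ways to cross-check what version the pool really is after the reinstall (a sketch; "tank" is a placeholder pool name):

zpool upgrade                      # lists pools whose on-disk version is older than this OS supports
zpool upgrade -v                   # lists every on-disk version this OS understands
zdb -C tank                        # the stored configuration includes the pool's version field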
2012 Dec 20
3
Pool performance when nearly full
Hi
I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking
(and I'd check the ZFS wikis but the websites are down at the moment).
Firstly, which is correct: free space shown by "zfs list" or by "zpool iostat"?
zfs list:
used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%
zpool iostat:
used
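Both numbers are "correct"; they just count different things. zpool list/iostat report raw pool space, which for raidz vdevs includes parity blocks, while zfs list reports space usable by datasets after parity, reservations and so on, so the two will not agree on a raidz pool. A sketch of the commands to compare (the pool name is a placeholder):

zpool list tank                    # raw SIZE/ALLOC/FREE at the pool level, parity included
zfs list -o name,used,avail tank   # usable space as the datasets see it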
2007 Sep 17
2
zpool create -f not applicable to hot spares
Hello zfs-discuss,
If you do 'zpool create -f test A B C spare D E' and D or E contains a
UFS filesystem, then despite -f the zpool command will complain that
there is a UFS file system on D.
Workaround: create a test pool with -f on D and E, destroy it, and
then create the first pool with D and E as hot spares.
I've tested it on s10u3 + patches; can someone confirm
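The workaround described above, spelled out (using the same placeholder device names as the post):

zpool create -f scratch D E        # -f does work here, so the old UFS labels on D and E get overwritten
zpool destroy scratch              # free the temporary pool
zpool create test A B C spare D E  # now the real pool can be created with D and E as hot spares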
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected
a few; apologies in advance.
A couple of questions. First, I have a physical host (call him bob) that was
just installed with b134 a few days ago. I upgraded to b145 using the
instructions on the Illumos wiki yesterday. The pool has been upgraded (27)
and the zfs file systems have been upgraded (5).
chris@bob:~# zpool
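For reference, the commands involved in such an upgrade are (a sketch; "tank" is a placeholder pool name, and the resulting version numbers are whatever the running build supports):

zpool upgrade                      # show pools whose on-disk version is older than this build supports
zpool upgrade tank                 # upgrade the pool itself
zfs upgrade -r tank                # upgrade the filesystem version of the pool's datasets, recursively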