Displaying 20 results from an estimated 50000 matches similar to: "Status on shrinking zpool"
2008 Jan 28
4
? Removing a disk from a ZFS Storage Pool
Hi
my understanding is that you cannot remove a disk from a ZFS storage
pool once you have added it... but I also think I saw an email from Jeff
B saying that the ability to depopulate a disk so that it can be
removed is being worked on... or was I dreaming?
What is the status of this?
Thanks
Tim
--
Tim Thomas
Staff Engineer
Storage Systems Product Group
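For anyone finding this thread later: device evacuation did eventually land in OpenZFS as `zpool remove`. A minimal sketch, assuming a modern OpenZFS system; the pool and device names here are hypothetical:

```shell
# Evacuating a top-level vdev (supported in modern OpenZFS, long after
# this thread). Pool and device names are hypothetical.
zpool remove tank c1t4d0   # copy c1t4d0's data onto the remaining vdevs
zpool status tank          # reports evacuation progress until the device
                           # disappears from the config
```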
2007 Jul 03
1
zpool status -v: machine readable format?
I was wondering if anyone had a script to parse the "zpool status -v" output into a more machine readable format?
Thanks,
David
This message posted from opensolaris.org
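Lacking an official machine-readable mode in that release, a small awk filter over the config table is a common workaround. A minimal sketch (the sample input is embedded so the script is self-contained; on a real system you would pipe `zpool status -v` in instead):

```shell
# Emit one CSV row per device in the config table of `zpool status -v`.
parse_status() {
  awk '
    $1 == "NAME" { inconfig = 1; next }    # the header row opens the table
    inconfig && NF == 0 { inconfig = 0 }   # a blank line closes it
    inconfig && NF >= 5 { printf "%s,%s,%s,%s,%s\n", $1, $2, $3, $4, $5 }
  '
}
parse_status <<'EOF'
  pool: filepool
 state: ONLINE
config:

        NAME              STATE   READ WRITE CKSUM
        filepool          ONLINE     0     0     0
          /export/f1.dat  ONLINE     0     0     0
EOF
```

For the embedded sample this prints `filepool,ONLINE,0,0,0` and `/export/f1.dat,ONLINE,0,0,0`; fields beyond the first five (e.g. error notes) are simply dropped.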
2007 Nov 13
3
zpool status can not detect the vdev removed?
I make a file zpool like this:
bash-3.00# zpool status
pool: filepool
state: ONLINE
scrub: none requested
config:
        NAME              STATE   READ WRITE CKSUM
        filepool          ONLINE     0     0     0
          /export/f1.dat  ONLINE     0     0     0
          /export/f2.dat  ONLINE     0     0     0
          /export/f3.dat  ONLINE     0     0     0
        spares
2007 Sep 26
3
zpool status (advanced listing)?
Under the GUI, there is an "advanced" option which shows vdev capacity, etc. I'm drawing a blank about how to get this with the commands...
Thanks,
David
2010 Dec 28
2
zpool status keeps telling "resilvered"
Hi!
We have a raidz2 pool with 1 spare. Recently, one of the drives generated a lot of checksum errors, so it was automatically replaced with the spare. Since the errors stopped at some point, we figured that the drive itself was not at fault. We offlined it, zeroed it and onlined it again, started resilvering, and manually detached the spare drive. The zpool status is ONLINE and mentions
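A minimal sketch of the cycle described above, with hypothetical pool and device names (offline the suspect disk, zero it, online it to resilver, then hand the spare back):

```shell
# Hypothetical names; 'tank' is a raidz2 pool whose spare is c0t7d0.
zpool offline tank c0t3d0   # take the suspect disk out of service
zpool online tank c0t3d0    # bring it back after zeroing; resilver starts
zpool detach tank c0t7d0    # return the spare to the available-spares list
zpool clear tank            # reset the error counters once satisfied
```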
2008 Feb 25
3
[Bug 631] New: zpool get with no pool name dumps core in zfs-crypto
http://defect.opensolaris.org/bz/show_bug.cgi?id=631
Summary: zpool get with no pool name dumps core in zfs-crypto
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: minor
Priority: P4
Component: other
AssignedTo:
2008 Dec 18
3
automatic forced zpool import with unmatched hostid
Hi,
since the hostid is stored in the label, "zpool import" fails if the hostid doesn't match. Under certain circumstances (LDOM failover) this means you have to manually force the zpool import while booting. With more than 80 LDOMs on a single host it would be great if we could configure the machine back to the old behavior, where it didn't fail, maybe with an /etc/system
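As a stopgap until such a tunable exists, the failover boot sequence can force the import explicitly. A sketch, with a hypothetical pool name:

```shell
# -f overrides the "pool may be in use by another system" hostid check.
zpool import -f mypool
```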
2008 Mar 13
3
[Bug 759] New: 'zpool create -o keysource=,' hanged
http://defect.opensolaris.org/bz/show_bug.cgi?id=759
Summary: 'zpool create -o keysource=,' hanged
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: i86pc/i386
OS/Version: Solaris
Status: NEW
Severity: minor
Priority: P3
Component: other
2008 Mar 27
5
[Bug 871] New: 'zpool key -l' core dumped with keysource=hex,prompt and unmatched entered in
http://defect.opensolaris.org/bz/show_bug.cgi?id=871
Summary: 'zpool key -l' core dumped with keysource=hex,prompt and
unmatched entered in
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Windows
Status: NEW
Severity: minor
2008 Mar 10
2
[Bug 701] New: 'zpool create -o keysource=' fails on sparc - invalid argument
http://defect.opensolaris.org/bz/show_bug.cgi?id=701
Summary: 'zpool create -o keysource=' fails on sparc - invalid
argument
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: SPARC/sun4u
OS/Version: Solaris
Status: NEW
Severity: minor
Priority:
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2008 May 22
1
[Bug 2017] New: zpool key -l fails on "first" load.
http://defect.opensolaris.org/bz/show_bug.cgi?id=2017
Summary: zpool key -l fails on "first" load.
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Other
Status: NEW
Severity: minor
Priority: P4
Component: other
AssignedTo: darrenm
2010 Apr 21
2
HELP! zpool corrupted data
Hello,
Due to a power outage our file server running FreeBSD 8.0p2 will no longer come up due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143 cd:
FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64
mfsbsd# zpool import
pool: tank
id: 1998957762692994918
state: FAULTED
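For a FAULTED pool like this, the usual escalation is worth trying before giving up. A hedged sketch (the pool name comes from the listing above; the -F rewind option needs a zpool new enough to support it, e.g. snv_143):

```shell
zpool import -f tank       # force import despite the foreign hostid
zpool import -Fn tank      # dry run: report whether a txg rewind would help
zpool import -F -f tank    # rewind to the last consistent txg, discarding
                           # the most recent few seconds of writes
```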
2011 Jul 26
2
recover zpool with a new installation
Hi all,
I lost my storage because rpool doesn't boot. I tried to recover, but
opensolaris says to "destroy and re-create".
My rpool is installed on a flash drive, and my pool (with my data) is on
other disks.
My question is: is it possible to reinstall opensolaris on a new flash
drive, without touching my pool of disks, and then recover that pool?
Thanks.
Regards,
--
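In general this should work: the data pool carries its own metadata on its member disks, so after a fresh install it can simply be imported again. A sketch with a hypothetical pool name:

```shell
zpool import               # scan attached disks and list importable pools
zpool import -f datapool   # -f because the old install never exported it
```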
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris-10 u5/08,
on a SunFire t5220, and this is our first rollout of ZFS and Zpools.
Have 8 disks, boot disk is hardware mirrored (c1t0d0 + c1t1d0)
Created Zpool my_pool as RaidZ using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0
I am working on alerting & recovery plans for disks failures in the zpool.
As a test, I have pulled disk
2006 Jan 30
4
Adding a mirror to an existing single disk zpool
Hello All,
I'm transitioning data off my old UFS partitions onto ZFS. I don't have a lot of spare space, so I created a zpool, rsync'ed the data from UFS to the ZFS mount, and then repartitioned the UFS drive to have partitions that match the cylinder count of the ZFS one. The idea here is that once the data is over I wipe out UFS and then attach that partition to the
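The attach step described above can be sketched as follows (device names hypothetical; `zpool attach` turns a single-disk vdev into a mirror and resilvers onto the new side):

```shell
zpool attach tank c0t0d0s0 c0t1d0s0   # mirror the existing device onto
                                      # the freed ex-UFS partition
zpool status tank                     # watch the resilver complete
```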
2007 Jul 25
3
Any fix for zpool import kernel panic (reboot loop)?
My system (a laptop with ZFS root and boot, SNV 64A) on which I was trying Opensolaris now has the zpool-related kernel panic reboot loop.
Booting into failsafe mode or another solaris installation and attempting:
'zpool import -F rootpool' results in a kernel panic and reboot.
A search shows this type of kernel panic has been discussed on this forum over the last year.
2007 Jun 21
9
Undo/reverse zpool create
Hi,
If I add an entire disk to a new pool by doing "zpool create", is this reversible?
I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in another system) can I get this back or is zpool create destructive?
Joubert
2010 Jan 07
4
link in zpool upgrade -v broken
http://www.opensolaris.org/os/community/zfs/version/
No longer exists. Is there a bug for this yet?
--
Ian.
2010 May 20
2
reconstruct recovery of rpool zpool and zfs file system with bad sectors
Folks I posted this question on (OpenSolaris - Help) without any replies http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am re-posting here in the hope someone can help ... I have updated the wording a little too (in an attempt to clarify)
I currently use OpenSolaris on a Toshiba M10 laptop.
One morning the system wouldn't boot OpenSolaris 2009.06 (it was simply