Displaying 20 results from an estimated 4000 matches similar to: "Unable to import zpool since system hang during zfs destroy"
2010 May 07
0
confused about zpool import -f and export
Hi, all,
I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen: I have to run the installer under HVM mode, then get it back up under PV mode. In that process the controller names change, and that's where I'm getting tripped up.
I do a successful install, then I boot OK,
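For reference, a minimal sketch of the sequence normally used when controller names change between boots (the pool name 'tank' is a placeholder; a root pool cannot be exported from the running system, so the forced import is what typically applies there):
  # export cleanly before reconfiguring the domain, if the pool is not the root pool
  zpool export tank
  # after booting under the new configuration, scan for pools and import by name;
  # -f forces the import if the pool was never exported
  zpool import
  zpool import -f tank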
2011 Jan 18
4
Zpool Import Hanging
Hi All,
I believe this has been asked before, but I wasn't able to find too much
information about the subject. Long story short, I was moving data around on
a storage zpool of mine and a zfs destroy <filesystem> hung (or so I
thought). This pool had dedup turned on at times while imported as well;
it's running on a Nexenta Core 3.0.1 box (snv_134f).
The first time the machine was
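For context, a hedged sketch of how such a pool is usually watched while the interrupted destroy replays during import (pool name is a placeholder; with dedup the replay can take many hours):
  # kick off the import and leave it running
  zpool import tank
  # from another shell, check whether the pool is actually making progress
  zpool iostat tank 5
  iostat -xn 5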
2010 Sep 30
3
Cannot destroy snapshots: dataset does not exist
Hello,
I have a ZFS filesystem (zpool version 26 on Nexenta CP 3.01) which I'd like to roll back, but it's having an existential crisis.
Here's what I see:
root@bambi:/# zfs rollback bambi/faline/userdocs@AutoD-2010-09-28
cannot rollback to 'bambi/faline/userdocs@AutoD-2010-09-28': more recent snapshots exist
use '-r' to
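The error message itself points at the fix; a sketch using the names from the output above:
  # list the snapshots that are newer than the rollback target
  zfs list -t snapshot -r bambi/faline/userdocs
  # -r destroys those more recent snapshots so the rollback can proceed
  zfs rollback -r bambi/faline/userdocs@AutoD-2010-09-28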
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process:
hydra# zpool import
pool: tank
id:
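The on-disk format is endian-adaptive, so moving the disks from x86 to SPARC is supported in itself; the usual next step looks roughly like this (pool name taken from the output above):
  # list importable pools and the reason each one is flagged
  zpool import
  # force the import, since the pool was never exported from the dead machine
  zpool import -f tank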
2009 Aug 21
0
bug: zpool create allows a full-partition raw device as a pool member
If you run Solaris or OpenSolaris, you may use, for example, c0t0d0 (for a SCSI disk) or c0d0 (for an IDE/SATA disk) as the system disk.
By default, Solaris x86 and OpenSolaris will use the raw device
c0t0d0s0 (/dev/rdsk/c0t0d0s0) as the member device of rpool.
In fact, there can be more than one solaris2 partition on each hard disk, so we can also use a raw device like c0t0d0p1 (/dev/rdsk/c0t0d0p1)
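A hedged illustration of the kind of command the report describes (device and pool names are examples only):
  # p1 addresses an entire fdisk partition rather than a Solaris slice,
  # and zpool create accepts it as a member device
  zpool create testpool c0t0d0p1
  zpool status testpool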
2008 Feb 07
1
zpool destroy core dumps with unavailable iscsi device
While playing around with ZFS and iSCSI devices I've managed to remove an iscsi target before removing the zpool. Now any attempt to delete the pool (with or without -f) core dumps zpool.
Any ideas how I get rid of this pool?
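The workarounds usually suggested, sketched with a placeholder pool name (none verified against this exact build):
  # a forced destroy is the first thing to try
  zpool destroy -f mypool
  # if zpool keeps dumping core, re-creating an iSCSI target with the same name
  # and size (so the device node exists again) often lets the destroy complete;
  # failing that, removing /etc/zfs/zpool.cache and rebooting keeps the dead
  # pool from being opened automatically so it can be cleaned up later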
2006 Jul 19
1
Q: T2000: raidctl vs. zpool status
Hi all,
IHACWHAC (I have a colleague who has a customer - hello, if you're
listening :-) who's trying to build and test a scenario where he can
salvage the data off the (internal ?) disks of a T2000 in case the sysboard
and with it the on-board raid controller dies.
If I understood correctly, he replaces the motherboard, does some magic to
get the raid config back, but even
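A rough sketch of the checks involved after the board swap (purely illustrative):
  # show any hardware RAID volumes the on-board controller currently exposes
  raidctl
  # once the volume (or the bare disks) is visible again, ZFS can look for
  # its own labels and offer the pool for import
  zpool import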
2010 Aug 18
1
Kernel panic on import / interrupted zfs destroy
I have a box running snv_134 that had a little boo-boo.
The problem first started a couple of weeks ago with some corruption on two filesystems in an 11-disk, 10 TB raidz2 set. I ran a couple of scrubs that revealed a handful of corrupt files on my 2 de-duplicated zfs filesystems. No biggie.
I thought that my problems had something to do with de-duplication in 134, so I went about the process of
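The recovery options usually suggested for this build look like the following (a sketch; the pool name is a placeholder, and -F discards the last few transaction groups):
  # recovery-mode import, available since around build 128
  zpool import -F tank
  # on builds that support it, a read-only import first is a gentler way to
  # assess the damage
  zpool import -o readonly=on tank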
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples on how to create zpools using full disks. The zpool(1M) page uses "c0t0d0" but OpenSolaris Bible and others show "c0t0d0p0". E.g.:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0
I have not been able to find any discussion on whether (or when) to
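For whole disks the zpool(1M) form is the one generally recommended; a hedged sketch of the distinction:
  # giving zpool the bare disk name lets it write an EFI label and manage the
  # whole disk (including enabling the write cache)
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
  # cXtYdZp0 refers to the whole disk through the fdisk layer; it works, but
  # ZFS then uses the device as-is instead of relabeling it
  zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0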
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2008 Jan 02
1
Adding to zpool: would failure of one device destroy all data?
I didn't find any clear answer in the documentation, so here it goes:
I've got a 4-device RAIDZ array in a pool. I then add another RAIDZ array to the pool. If one of the arrays fails, would all the data be lost, or would it be like disc spanning, with only the data on the failed array lost?
Thanks in advance.
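For context, a sketch of the layout being asked about (device names are placeholders):
  # original 4-device raidz
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
  # adding a second raidz top-level vdev stripes new writes across both
  zpool add tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
Because data is striped across the top-level vdevs, losing an entire vdev (more failures than its parity can absorb) takes the whole pool with it; it does not behave like simple disk spanning.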
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi,
I have two servers running: FreeBSD with a zpool v28 and a Nexenta (OpenSolaris b134) running zpool v26.
Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic.
check the panic @
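For reference, a sketch of the replication direction being described (dataset names and the ssh host are placeholders); sending from an older pool/filesystem version to a newer one is the supported direction, not the reverse:
  # on the nexenta (zpool v26) side
  zfs snapshot -r remotepool/users@replica1
  zfs send -R remotepool/users@replica1 | ssh freebsd-host zfs receive -dF remotepool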
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case drive c0t6d0 was failing so I ran,
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state.
Hopefully this is incorrect. At this point the vdev in question
now has
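A sketch of what is usually checked once the resilver reports complete (pool and device names from the message above):
  # confirm the replace has finished and see which vdev is still flagged
  zpool status -v home
  # clear the error counters and stale degraded markers if nothing is
  # actually faulted any more
  zpool clear home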
2010 Sep 24
3
Kernel panic on ZFS import - how do I recover?
I posted this on the www.nexentastor.org forums, but no answer so far, so I apologize if you are seeing this twice. I am also engaged with nexenta support, but was hoping to get some additional insights here.
I am running nexenta 3.0.3 community edition, based on 134. The box crashed yesterday, and goes into a reboot loop (kernel panic) when trying to import my data pool, screenshot attached.
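The commonly cited way to break such a loop looks like this (a sketch only; the tunables trade safety for the chance to complete the import and should be removed afterwards, and 'data' is a placeholder pool name):
  # boot from other media or move /etc/zfs/zpool.cache aside so the pool is not
  # opened automatically, then add to /etc/system and reboot:
  #   set zfs:zfs_recover=1
  #   set aok=1
  # then try a recovery-mode import
  zpool import -F data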
2010 Jul 25
4
zpool destroy causes panic
I'm trying to destroy a zfs array which I recently created. It contains
nothing of value.
# zpool status
  pool: storage
 state: ONLINE
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
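Since the pool is disposable, the usual suggestions look like this (a sketch; the device path is a placeholder):
  # forced destroy first
  zpool destroy -f storage
  # if the destroy itself panics the machine, removing /etc/zfs/zpool.cache
  # (from alternate boot media) keeps the pool from being opened at boot, after
  # which the old labels on each former member can be overwritten, e.g.:
  dd if=/dev/zero of=/dev/rdsk/c0t0d0s0 bs=1024k count=10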
2008 Jan 10
2
Assistance needed expanding RAIDZ with larger drives
Hi all,
Please can you help with my ZFS troubles:
I currently have 3 x 400 GB Seagate NL35s and a 500 GB Samsung Spinpoint in a RAIDZ array that I wish to expand by systematically replacing each drive with a 750 GB Western Digital Caviar.
After failing miserably, I'd like to start from scratch again if possible. When I last tried, the replace command hung for an age, network
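For reference, the drive-at-a-time expansion loop normally looks like this (a sketch; device names are placeholders, and each resilver must finish before the next replace starts):
  # swap in the larger disk and resilver onto it
  zpool replace tank c0t1d0 c0t5d0
  zpool status tank          # wait for "resilver completed", then repeat
  # once every member is larger, the extra space shows up; newer builds can use
  # the autoexpand property instead of an export/import cycle
  zpool set autoexpand=on tank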
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout: As you can
see, he has mostly raidz zvols but has one raidz2 in the same zpool.
What are the implications here? Is this a bad thing to do? Please
elaborate.
Thanks,
Scott Gaspard
Scott.J.Gaspard at Sun.COM
>   NAME        STATE   READ WRITE CKSUM
>   chipool1    ONLINE      0     0     0
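Mixing replication levels in one pool is allowed but warned about; a sketch of how such a layout comes into being (device names are placeholders):
  # zpool normally refuses a top-level vdev whose replication level differs
  # from the rest of the pool...
  zpool add chipool1 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
  # ...and -f overrides that check, which is presumably how this pool was built
  zpool add -f chipool1 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
The practical implication is that redundancy is only as good as the weakest vdev: the raidz vdevs still tolerate only one failure each.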
2010 Jul 06
3
Help with Faulted Zpool - Call for Help (Cross post)
Hello list,
I posted this a few days ago on the opensolaris-discuss@ list.
I am posting here because there may be too much noise on other lists.
I have been without this zfs set for a week now.
My main concern at this point is whether it is even possible to recover this zpool.
How does the metadata work? What tool could I use to rebuild the
corrupted parts, or even find out what parts are corrupted?
most but
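The tool normally reached for when a pool will not import is zdb; a hedged sketch (pool name and device path are placeholders, and the available options vary by build):
  # dump the four ZFS labels of a member device to see what the pool thinks it is
  zdb -l /dev/rdsk/c0t0d0s0
  # walk the metadata of a pool that is not imported
  zdb -e -bb tank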
2010 Mar 11
1
zpool iostat / how to tell if you're IOP bound
What is the best way to tell if you're bound by the number of individual
operations per second / random IO? "zpool iostat" has an "operations" column,
but this doesn't really tell me if my disks are saturated. Traditional
"iostat" doesn't seem to be the greatest place to look when utilizing zfs.
Thanks,
Chris
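A sketch of the usual way to look at this (pool name and sampling interval are placeholders):
  # per-vdev read/write operation counts, sampled every 5 seconds
  zpool iostat -v tank 5
  # device-level view; %b (busy) and asvc_t (average service time) are the
  # columns that show whether the disks themselves are saturated
  iostat -xn 5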
2010 Jul 12
7
How do I clean up corrupted files from zpool status -v?
Hi Folks..
I have a system that was inadvertently left unmirrored for root. We were able
to add a mirror disk, resilver, and fix the corrupted files (nothing very
interesting was corrupt, whew), but zpool status -v still shows errors..
Will this self correct when we replace the degraded disk and resilver? Or is
there something else that I'm not finding that I need to do to clean up?
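The usual answer, sketched with a placeholder pool name: once the affected files have been restored or deleted, the error list persists until the pool is scrubbed again (occasionally it takes a second scrub, or an explicit clear, for the counters to drop):
  zpool scrub rpool
  zpool status -v rpool
  # clear the logged error counts once the scrub comes back clean
  zpool clear rpool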