similar to: Destroying old zpools

Displaying 20 results from an estimated 50000 matches similar to: "Destroying old zpools"

2009 Nov 17
1
upgrading to the latest zfs version
Hi guys, after reading the mailings yesterday I noticed someone was after upgrading to zfs v21 (deduplication). I'm after the same: I installed osol-dev-127 earlier, which comes with v19, and then followed the instructions on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to date. However, the system reports that no updates are available and stays at zfs v19. Any ideas?
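Pool version 21 (dedup) landed in a development build newer than 127, so the image has to be updated past b127 before zpool upgrade will offer it. A rough sketch of the usual path, assuming the /dev repository is meant to be the preferred publisher (the publisher settings and the pool name rpool below are only illustrative):
  # point the image at the dev repository and rebuild the boot environment
  pkg set-publisher -P -O http://pkg.opensolaris.org/dev opensolaris.org
  pkg image-update
  # after rebooting into the new boot environment:
  zpool upgrade -v       # list the pool versions this build supports
  zpool upgrade rpool    # one-way upgrade; older builds can no longer import the pool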
2008 Feb 15
2
[storage-discuss] Preventing zpool imports on boot
On Thu, Feb 14, 2008 at 11:17 PM, Dave <dave-opensolaris at dubkat.com> wrote:
> I don't want Solaris to import any pools at bootup, even when there were
> pools imported at shutdown/at crash time. The process to prevent
> importing pools should be automatic and not require any human
> intervention. I want to *always* import the pools manually.

Hrm... what
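One way to get this behaviour (a sketch, not a tested recipe; 'tank' is a placeholder pool name) is to keep the pool out of /etc/zfs/zpool.cache, since the boot-time import only looks at that cache file:
  # stop recording the pool in the default cache file
  zpool set cachefile=none tank
  # import by hand later without re-adding it to the cache
  zpool import -o cachefile=none tank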
2007 Oct 30
1
Different Sized Disks Recommendation
Hi, I was first attracted to ZFS (and therefore OpenSolaris) because I thought that ZFS allowed the use of different-sized disks in raidz pools without wasted disk space. Further research has confirmed that this isn't possible--by default. I have seen a little bit of documentation around using ZFS with slices. I think this might be the answer, but I would like to be sure what the
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2008 Apr 02
1
delete old zpool config?
Hi experts, zpool import shows some weird config of an old zpool:
bash-3.00# zpool import
  pool: data1
    id: 7539031628606861598
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:
        data1    UNAVAIL  insufficient replicas
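Stale entries like this usually come from old ZFS labels still sitting on the disks rather than from any config file. A rough sketch for checking and clearing them (the device name is a placeholder, and zpool labelclear only exists on newer releases; double-check the device before wiping anything):
  # see which labels are still present on the suspect device
  zdb -l /dev/rdsk/c1t0d0s0
  # on releases that have it, remove the stale labels so the pool stops showing up
  zpool labelclear -f /dev/rdsk/c1t0d0s0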
2009 Jan 23
2
zpool import fails to find pool
Hi all, I moved from Sol 10 Update 4 to Update 6. Before doing this I exported both of my zpools and replaced the discs containing the ufs root with two new discs (these discs did not have any zpool/zfs info and are raid-mirrored in hardware). Once I had installed Update 6 I did a zpool import, but it only shows (and was able to import) one of the two pools. Looking at dmesg it appears as
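When an exported pool does not show up, pointing the import scan at an explicit device directory often helps; a minimal sketch ('tank' is a placeholder pool name):
  # scan a specific device directory instead of the default search path
  zpool import -d /dev/dsk
  # then import by name, or by the numeric id reported in the listing
  zpool import -d /dev/dsk tank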
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi, We are seeing more long delays in zpool import, say 4~5 or even 25~30 minutes, especially when backup jobs are running in the FC SAN where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array, some pools take a few seconds to import while others take minutes. The pattern seems random to me so far. It was first noticed soon after the upgrade to Solaris 10 U6
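A dedicated cache file can avoid rescanning every LUN on import, which is usually where the minutes go; a sketch, with the path and pool name as placeholders:
  # while the pool is imported, record it in its own cache file
  zpool set cachefile=/etc/zfs/san-pools.cache tank
  # later imports read the cache file instead of probing every device
  zpool import -c /etc/zfs/san-pools.cache tank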
2008 Jan 31
3
I/O error: zpool metadata corrupted after powercut
In the last 2 weeks we had 2 zpools corrupted. The pool was visible via zpool import but could not be imported anymore; during the import attempt we got an I/O error. After a first power cut we lost our jumpstart/nfsroot zpool (another pool was still OK). Luckily the jumpstart data was backed up and easily restored; the nfsroot filesystems were not, but those were just test machines. We thought the metadata corruption
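Later builds (OpenSolaris b128 onwards) added a recovery mode to zpool import that rolls back the last few transactions instead of failing on damaged metadata; a sketch, assuming such a build is available and with 'tank' as a placeholder pool name:
  # dry run: report what would be discarded to reach an importable state
  zpool import -nF tank
  # perform the rollback and import
  zpool import -F tank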
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all, I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
2009 Jul 20
1
zpool import problem / missing label / corrupted data
After a power outage due to a thunderstorm my 3-disk raidz1 pool has become UNAVAILable. It is a ZFS v13 pool using the 3 whole disks, created on FreeBSD 8-CURRENT x64, and it worked well for over a month. Unfortunately I wasn't able to import the pool with either a FreeBSD LiveCD or the current OpenSolaris LiveCD x86/x64. When I tried to import the pool with FreeBSD the system just hangs (I
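A first diagnostic step is to dump whatever labels are still readable on each member disk; the device names below are placeholders for the Solaris and FreeBSD naming styles:
  zdb -l /dev/dsk/c5t0d0s0   # Solaris-style device node
  zdb -l /dev/ad4            # FreeBSD-style device node
A healthy member shows four identical copies of the label; a disk where none are readable would explain the missing-label / corrupted-data state.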
2007 Jan 10
0
ZFS and HDS ShadowImage
Hi Derek, Here's the latest email I've received from the zfs-discuss alias.
------------- Begin Forwarded Message -------------
Date: Mon, 18 Sep 2006 23:55:27 -0400
From: Jonathan Edwards <Jonathan.Edwards@sun.com>
Subject: Re: [zfs-discuss] ZFS and HDS ShadowImage
To: Eric Schrock <eric.schrock@sun.com>
Cc: zfs-discuss@opensolaris.org, Torrey McMahon
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our zfs storage server. We have 2 pools: pool 1, stor, is a raidz out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the zfs level) we upgraded our NAS head from OpenSolaris b57
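To see how far the upgrade actually got, the usual check is to compare the pools' on-disk versions against what the running kernel supports; 'stor' is the pool named in the post:
  zpool upgrade        # lists pools that are below the current version
  zpool upgrade -v     # lists the versions this kernel supports
  zpool upgrade stor   # bring the named pool up to the current version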
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples on how to create zpools using full disks. The zpool(1M) page uses "c0t0d0" but OpenSolaris Bible and others show "c0t0d0p0". E.g.:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0
I have not been able to find any discussion on whether (or when) to
2009 Feb 12
2
Solaris and zfs versions
We've been experimenting with zfs on OpenSolaris 2008.11. We created a pool in OpenSolaris and filled it with data. Then we wanted to move it to a production Solaris 10 machine (generic_137138_09), so I "zpool exported" in OpenSolaris, moved the storage, and "zpool imported" in Solaris 10. We got: Cannot import 'deadpool': pool is formatted
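Pool versions only move forward, so a pool created on 2008.11 (a newer pool version) cannot be imported by an older Solaris 10 kernel. One workaround, sketched here with a placeholder device and an illustrative version number, is to create the pool at a version the older host supports:
  # on the Solaris 10 box: see which pool versions it can import
  zpool upgrade -v
  # on OpenSolaris: create the pool pinned to that older version
  zpool create -o version=10 deadpool c1t0d0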
2009 Nov 02
0
Kernel panic on zfs import (hardware failure)
Hey,
On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin <Victor.Latushkin at sun.com> wrote:
> Donald Murray, P.Eng. wrote:
>>
>> Hi,
>>
>> I've got an OpenSolaris 2009.06 box that will reliably panic whenever
>> I try to import one of my pools. What's the best practice for
>> recovering (before I resort to nuking the pool and
2010 Apr 23
12
Re-attaching zpools after machine termination [amazon ebs & ec2]
I'm trying to provide some "disaster-proofing" on Amazon EC2 by using a ZFS-based EBS volume for primary data storage with Amazon S3-backed snapshots. My aim is to ensure that, should the instance terminate, a new instance can spin up, attach the EBS volume and auto-/re-configure the zpool. I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure
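Since a terminated instance never gets to export the pool, the replacement instance normally has to force the import past the hostid check; a minimal sketch, with 'tank' as a placeholder pool name, that could go in a boot-time script:
  # after the EBS volume is attached to the new instance
  zpool import          # list pools visible on the newly attached devices
  zpool import -f tank  # -f because the old instance never exported it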
2010 Mar 19
0
zpool import problem
Hello All, I have some problem with the import of pools. On the source system the pools are configured with emcpower devices on slice 2 (emcpower1c):
zpool create mypool emcpower1c
When I try to do an import on another host with mpxio enabled I get this result:
  pool: ora_system.2
    id: 9755850482304172097
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool
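When the device names change between hosts (emcpower on one side, mpxio on the other), directing the scan at the right device directory and importing by the numeric id is the usual approach; a sketch using the id shown above:
  zpool import -d /dev/dsk
  zpool import -d /dev/dsk -f 9755850482304172097   # -f if the pool was never exported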
2009 Nov 02
2
How do I protect my zfs pools?
Hi, I may have lost my first zpool, due to ... well, we're not yet sure. The 'zpool import tank' causes a panic -- one which I'm not even able to capture via savecore. I'm glad this happened when it did. At home I am in the process of moving all my data from a Linux NFS server to OpenSolaris. It's something I'd been meaning to do
2008 Mar 12
5
[Bug 752] New: zfs set keysource no longer works on existing pools
http://defect.opensolaris.org/bz/show_bug.cgi?id=752
Summary: zfs set keysource no longer works on existing pools
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: blocker
Priority: P1
Component: other
AssignedTo:
2007 Feb 13
1
Zpool complains about missing devices
Hello, We had a situation at a customer site where one of the zpools complains about missing devices. We do not know which devices are missing. Here are the details: the customer had a zpool created on a hardware RAID (SAN). There is no redundancy in the pool. The pool had 13 LUNs; the customer wanted to increase its size and added 5 more LUNs. During the zpool add process the system panicked with zfs