similar to: Clone a disk, need to change pool_guid

Displaying 20 results from an estimated 10000 matches similar to: "Clone a disk, need to change pool_guid"

2007 Apr 14
1
Move data from the zpool (root) to a zfs file system
Hi List, As a ZFS newbie, I foolishly copied my data set to the root zpool file system (a large iSCSI SAN array). Thus: # zpool create -f iscsi c4t19d0 c4t20d0 c4t21d0 c4t22d0 c4t23d0 c4t24d0 # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT iscsi 9.53T 64.5K 5.34T 0% ONLINE - # zfs set mountpoint=/mydisks/iscsi iscsi Then copied
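A minimal sketch of the usual fix, assuming the data sits in the pool's root dataset mounted at /mydisks/iscsi and a child dataset name of iscsi/data is acceptable (the name is illustrative):
    # create a child filesystem; data should live in datasets, not in the pool root
    zfs create iscsi/data
    # move everything except the new dataset's own mountpoint
    # (this crosses dataset boundaries, so it is a copy-and-delete, not a rename)
    cd /mydisks/iscsi
    for f in *; do [ "$f" = data ] || mv "$f" data/; done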
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment with the backend storage being iSCSI-based, in part because of the possibilities for failover. In exploring things in our test environment, I have noticed that 'zpool import' takes a fairly long time; about 35 to 45 seconds per pool. A pool import time this slow obviously has implications for how fast
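One commonly suggested approach, sketched here with an illustrative pool name and cache file path: record the pool's device locations in a cachefile so the import does not have to probe every visible LUN.
    # remember where this pool's devices live
    zpool set cachefile=/etc/zfs/iscsi-pools.cache tank
    # on the failover node (after copying the file over), import everything it lists
    zpool import -c /etc/zfs/iscsi-pools.cache -a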
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our ZFS storage server. We have 2 pools: pool 1, stor, is a raidz out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level) we upgraded our NAS head from OpenSolaris b57
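For reference, a sketch of checking and finishing an on-disk version upgrade (pool name taken from the post; whether this has anything to do with the performance drop is a separate question):
    # list the pool versions the installed bits support
    zpool upgrade -v
    # show pools still at an older on-disk version, then upgrade one of them
    zpool upgrade
    zpool upgrade stor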
2009 Nov 20
1
Using local disk for cache on an iSCSI zvol...
I'm just wondering if anyone has tried this, and what the performance has been like. Scenario: I've got a bunch of v20z machines, with 2 disks. One has the OS on it, and the other is free. As these are disposable client machines, I'm not going to mirror the OS disk. I have a disk server with a striped mirror zpool, carved into a bunch of zvols, each exported via
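A sketch of the setup being asked about, assuming the free local disk is c0t1d0 and the pool built on the iSCSI zvol is called datapool (both names illustrative):
    # use the spare local disk as an L2ARC read cache for the iSCSI-backed pool
    zpool add datapool cache c0t1d0
    # the device should now appear under a "cache" section
    zpool status datapool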
2008 Feb 05
3
ZFS hang and boot hang when iSCSI device removed
We're currently evaluating ZFS prior to (hopefully) rolling it out across our server room, and have managed to lock up a server after connecting to an iSCSI target, and then changing the IP address of the target. Basically we have two test Solaris servers running, and I followed the instructions on the post below to share a zpool on Server1 using the iSCSI Target, and then import that
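One knob often brought up for this scenario, on builds that have the failmode pool property (an assumption for the build in question): it controls whether I/O against a pool whose device has vanished blocks indefinitely or returns errors.
    # the default failmode=wait blocks until the device comes back; continue returns EIO instead
    zpool set failmode=continue tank      # pool name illustrative
    zpool get failmode tank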
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi, We are seeing more long delays in zpool import, say 4~5 or even 25~30 minutes, especially when backup jobs are running on the FC SAN the LUNs reside on (no iSCSI LUNs yet). On the same node, for LUNs of the same array, some pools take a few seconds to import but others take minutes; the pattern seems random to me so far. It was first noticed soon after the upgrade to Solaris 10 U6
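A sketch of one way to narrow the search, assuming the slow part is zpool import probing every LUN visible on the SAN rather than reading this pool's labels (names and paths illustrative):
    # build a directory containing only this pool's devices and point import at it
    mkdir /dev/dsk-mypool
    ln -s /dev/dsk/c5t0d0s0 /dev/dsk-mypool/c5t0d0s0
    zpool import -d /dev/dsk-mypool mypool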
2004 May 03
2
Build problems on Linux SuSE 9.1
Hi, did anybody succeed in building R on SuSE Linux 9.1? My compilation failed with the following error messages: make[4]: Entering directory `/home/lederer/Source/R-1.9.0/src/modules/X11' gcc -I. -I../../../src/include -I../../../src/include -I/usr/X11R6/include -I/us r/local/include -DHAVE_CONFIG_H -D__NO_MATH_INLINES -mieee-fp -fPIC -g -O2 -c d ataentry.c -o dataentry.lo In file
2008 Jan 31
3
I/O error: zpool metadata corrupted after powercut
In the last 2 weeks we had 2 zpools corrupted. The pool was visible via zpool import, but could not be imported anymore; during the import attempt we got an I/O error. After the first powercut we lost our jumpstart/nfsroot zpool (another pool was still OK). Luckily the jumpstart data was backed up and easily restored; the nfsroot filesystems were not, but those were just test machines. We thought the metadata corruption
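Later ZFS builds grew a recovery-mode import that rewinds to an older transaction group; whether it exists on the build in the post is an assumption, and it discards the last few seconds of writes.
    # dry run: report whether a rewind recovery is possible and what it would discard
    zpool import -nF mypool        # pool name illustrative
    # perform the actual recovery import
    zpool import -F mypool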
2008 Feb 07
1
zpool destroy core dumps with unavailable iscsi device
While playing around with ZFS and iSCSI devices I've managed to remove an iscsi target before removing the zpool. Now any attempt to delete the pool (with or without -f) core dumps zpool. Any ideas how I get rid of this pool?
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool. root@:/$ zpool status panic[cpu1]/thread=fffffe8000758c80: assertion failed:
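For inspecting the uberblocks before touching anything, zdb can read the labels straight from the backing file; the path below is illustrative, the -e form assumes a zdb new enough to examine unimported pools, and the invalidation itself had no supported tool at the time (it meant patching the labels by hand).
    # print the four vdev labels from the file-backed vdev
    zdb -l /ufs/path/pool.img
    # dump the currently active uberblock of a pool that is not imported
    zdb -u -e mypool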
2010 Jul 16
6
Lost zpool after reboot
Hello, I have a dual boot with Windows 7 64 bit enterprise edition and OpenSolaris build 134. This is on a Sun Ultra 40 M1 workstation. Three hard drives, 2 in a ZFS mirror, 1 shared with Windows. For the last 2 days I was working in Windows. I didn't touch the hard drives in any way, except that I once opened Disk Management to figure out why an external USB hard drive was not being listed.
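A sketch of the usual first steps, assuming the mirror's devices are still visible to the OS (pool name illustrative):
    # scan attached devices and list any importable pools without importing them
    zpool import
    # if the pool shows up, import it by name or by its numeric id
    zpool import -f tank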
2011 Apr 01
15
Zpool resize
Hi, A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm changing the LUN size on the NetApp and Solaris format sees the new value, but zpool still shows the old value. I tried zpool export and zpool import but it didn't resolve my problem. bash-3.00# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
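On 10u9 the pool does not pick up a grown LUN on its own; a sketch of the two usual knobs (pool name illustrative, and the disk may also need its label refreshed in format first):
    # let the pool grow automatically whenever its devices grow
    zpool set autoexpand=on mypool
    # or expand one vdev in place after the LUN resize
    zpool online -e mypool c0d1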
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all, I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
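A sketch of recovering the files without rolling the live volume back, assuming the snapshot still exists and the old shareiscsi-style target is in use on b85 (names illustrative):
    # clone the snapshot so the deleted files can be read without touching the live zvol
    zfs clone tank/vol1@before-delete tank/vol1_restore
    # export the clone over iSCSI and copy the files back from its FAT32 partition
    zfs set shareiscsi=on tank/vol1_restore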
2010 Jul 16
1
Making a zvol unavailable to iSCSI trips up ZFS
I've been experimenting with a two-system setup in snv_134 where each system exports a zvol via COMSTAR iSCSI. One system imports both its own zvol and the one from the other system and puts them together in a ZFS mirror. I manually faulted the zvol on one system by physically removing some drives. What I expect to happen is that ZFS will fault the zvol pool and the iSCSI stack will
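For context, a minimal COMSTAR export of a zvol looks roughly like this (pool, volume name and size are illustrative; the LU GUID placeholder comes from sbdadm's output):
    zfs create -V 50g tank/lun0
    sbdadm create-lu /dev/zvol/rdsk/tank/lun0
    stmfadm add-view <lu-guid>
    itadm create-target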
2011 Jan 28
2
ZFS root clone problem
(For some reason I cannot find my original thread... so I'm reposting it.) I am trying to move my data off of a 40gb 3.5" drive to a 40gb 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
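A sketch of the sequence that normally works for a SPARC ZFS root, assuming the usual gotcha here is attaching the whole disk instead of an SMI-labelled s0 slice (device names taken from the post):
    # attach the slice, not the whole disk, and wait for the resilver to finish
    zpool attach -f rpool c0t0d0s0 c0t2d0s0
    zpool status rpool
    # put the ZFS boot block on the new disk (SPARC; x86 uses installgrub instead)
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0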
2012 Feb 16
3
4k sector support in Solaris 11?
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11, will zpool let me create a new pool with ashift=12 out of the box, or will I need to play around with a patched zpool binary (or the iSCSI loopback)? -- Dave Pooser, Manager of Information Services, Alford Media, http://www.alfordmedia.com
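Either way, it is worth verifying what the pool actually got; a sketch assuming zdb on Solaris 11 still reports each vdev's ashift (pool and device names illustrative):
    zpool create tank c7t0d0
    zdb -C tank | grep ashift     # ashift=12 means 4 KiB-aligned allocations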
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds when I issue the command zpool list it does not show any pool, and when I try to import again it says a device is missing in the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath. abcxxxx # zpool import pool: emcpool1 id:
2008 Jan 10
2
Assistance needed expanding RAIDZ with larger drives
Hi all, Please can you help with my ZFS troubles: I currently have 3 x 400 GB Seagate NL35s and a 500 GB Samsung Spinpoint in a RAIDZ array that I wish to expand by systematically replacing each drive with a 750 GB Western Digital Caviar. After failing miserably, I'd like to start from scratch again if possible. When I last tried, the replace command hung for an age, network
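A sketch of the usual drive-by-drive procedure (device names illustrative; each resilver must finish before the next replace, and the extra space only shows up once the last small drive is gone):
    # swap in the new drive for one RAIDZ member
    zpool replace tank c1t1d0 c2t1d0
    # watch the resilver and only move on when it reports completed
    zpool status tank
    # repeat for the remaining members; then export/import the pool
    # (or use autoexpand / zpool online -e on releases that have them) to see the new size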
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file server on it for learning purposes, and I moved almost all of my data to it. Yesterday, naturally after no longer having backups of the data on the server, I had a controller failure (SiS 180 (oh, the quality)) and the HDD was considered unplugged. When I noticed a few checksum failures on `zpool status` (including two on
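A sketch of the usual follow-up once the controller and disk are visible again (pool name illustrative); the scrub result shows whether anything was actually lost:
    # clear the error counters left over from the disconnect
    zpool clear tank
    # re-read and verify every block, then list any files with unrecoverable errors
    zpool scrub tank
    zpool status -v tank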
2009 Aug 27
0
How are you supposed to remove faulted spares from pools?
We have a situation where all of the spares in a set of pools have gone into a faulted state and now, apparently, we can't remove them or otherwise de-fault them. I'm confident that the underlying disks are fine, but ZFS seems quite unwilling to do anything about the spares situation. (The specific faulted state is 'FAULTED corrupted data' in 'zpool
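A sketch of what normally works, assuming the spare's device name is known (illustrative here); spares come out with zpool remove rather than zpool detach:
    # clear the fault on the spare, then take it out of the pool
    zpool clear tank c4t5d0
    zpool remove tank c4t5d0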