Displaying 20 results from an estimated 6000 matches similar to: "Physical Clone of zpool"
2007 Jan 10
0
ZFS and HDS ShadowImage
Hi Derek,
Here's the latest email I've received from the zfs-discuss alias.
------------- Begin Forwarded Message -------------
Date: Mon, 18 Sep 2006 23:55:27 -0400
From: Jonathan Edwards <Jonathan.Edwards@sun.com>
Subject: Re: [zfs-discuss] ZFS and HDS ShadowImage
To: Eric Schrock <eric.schrock@sun.com>
Cc: zfs-discuss@opensolaris.org, Torrey McMahon
2007 Jul 27
0
cloning disk with zpool
Hello the list,
I thought it would be easy to make a clone (not in the ZFS sense of the term) of a disk with zpool. This procedure is strongly inspired by
http://www.opensolaris.org/jive/thread.jspa?messageID=135038
and
http://www.opensolaris.org/os/community/zfs/boot/
Unfortunately this doesn't work, and we have no clue what could be wrong.
on c1d0 you have a zfs root
create a
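A minimal sketch of one way to make such a clone, assuming a hypothetical source pool rootpool and a hypothetical target disk c2d0 (not the thread's exact procedure), on releases that support recursive send/receive:
  zpool create -f newpool c2d0s0                        # hypothetical target pool on the second disk
  zfs snapshot -r rootpool@clone                        # recursive snapshot of the source pool
  zfs send -R rootpool@clone | zfs recv -Fd newpool     # replicate all datasets and properties
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0   # x86: make the clone bootable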
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment
with the backend storage being iSCSI-based, in part because of the
possibilities for failover. In exploring things in our test environment,
I have noticed that 'zpool import' takes a fairly long time; about
35 to 45 seconds per pool. A pool import time this slow obviously
has implications for how fast
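A hedged sketch of one common mitigation, assuming a hypothetical pool name tank and hypothetical iSCSI device names: keep a per-pool cache file so the importing node can read the pool configuration instead of tasting every LUN.
  zpool create -o cachefile=/etc/zfs/tank.cache tank mirror c2t0d0 c3t0d0
  # copy /etc/zfs/tank.cache to the failover node, then import from it
  zpool import -c /etc/zfs/tank.cache tank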
2007 Feb 13
1
Zpool complain about missing devices
Hello,
We had a situation at a customer site where one of the zpools complains about missing devices. We do not know which devices are missing. Here are the details:
The customer had a zpool created on a hardware RAID (SAN). There is no redundancy in the pool. The pool had 13 LUNs; the customer wanted to increase its size and added 5 more LUNs. During the zpool add the system panicked with zfs
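A sketch of the first things usually checked in this situation, assuming a hypothetical pool name sanpool:
  zpool status -v sanpool   # lists every vdev with its state, so missing LUNs show up as UNAVAIL
  zpool import              # with no arguments, lists importable pools and the devices it can find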
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi,
We are seeing more long delays in zpool import, say, 4~5 or even
25~30 minutes, especially when backup jobs are running on the FC SAN
where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs from the same array,
some pools take a few seconds but others take minutes; the pattern
seems random to me so far. It was first noticed soon after we upgraded to
Solaris 10 U6
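A hedged sketch of the cachefile workaround hinted at in the subject, using the hypothetical pool name fcpool:
  zpool set cachefile=/etc/zfs/fcpool.cache fcpool   # record the pool config in its own cache file
  zpool export fcpool
  zpool import -c /etc/zfs/fcpool.cache fcpool       # import from the cache instead of scanning the whole SAN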
2008 May 26
1
Clone a disk, need to change pool_guid
Hi folks,
I use an iSCSI disk mounted onto a Solaris 10 server. I installed a ZFS file system into s2 of the disk. I exported the disk and cloned it on the iSCSI target. The clone is a perfect copy of the iSCSI LUN and therefore has the same zpool name and guid.
My question is: is there any way to change the ZFS guid (and the zpool name, but that's easy) on the clone so that I can
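The clone can at least be imported under a different name; whether the GUID itself can be changed depends on the release. A sketch, where "clonedpool" stands for the original pool name and "lun-clone" is a hypothetical new name, assuming the original pool is not imported on the same host at the same time:
  zpool import -f clonedpool lun-clone   # import the clone under a new name
  zpool reguid lun-clone                 # only on much newer releases that have the reguid subcommand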
2007 Nov 27
0
zpool detach hangs, causing other zpool commands, format, df, etc. to hang
Customer has a Thumper running:
SunOS x4501 5.10 Generic_120012-14 i86pc i386 i86pc
where running "zpool detech disk c6t7d0" to detech a mirror causes zpool
command to hang with following kernel stack trace:
PC: _resume_from_idle+0xf8 CMD: zpool detach disk1 c6t7d0
stack pointer for thread fffffe84d34b4920: fffffe8001c30c10
[ fffffe8001c30c10 _resume_from_idle+0xf8() ]
2006 May 21
0
[LLVMdev] new llvmgcc4 snapshot
On Sat, 2006-05-20 at 19:22 -0500, Chris Lattner wrote:
> This should build with mainline, includes some performance tweaks, better
> build support for mingw, detects a faulty --enable-llvm configure
> option, and has better support for the Darwin/X86 ABI:
Any functional changes for non-Darwin, or am I OK continuing to use the
last snapshot?
Andrew
2009 Apr 08
0
zpool history coredump
Pawel,
another one (though minor, I suppose) bug report: while playing with my poor
pool, I tried to interact with it on -current, thus importing it with -f
(without upgrading, of course).
After reverting to RELENG_7, I found I can no longer access the history:
root@moose:~# /usr/obj/usr/src/cddl/sbin/zpool/zpool history
History for 'm':
2008-10-14.23:04:28 zpool create m raidz ad4h ad6h
2007 Nov 13
0
in a zpool consisting of regular files, when I remove a file vdev, why doesn't zpool status detect it?
I made a file-backed zpool like this:
bash-3.00# zpool status
pool: filepool
state: ONLINE
scrub: none requested
config:
NAME              STATE     READ WRITE CKSUM
filepool          ONLINE       0     0     0
  /export/f1.dat  ONLINE       0     0     0
  /export/f2.dat  ONLINE       0     0     0
  /export/f3.dat  ONLINE       0     0     0
spares
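The excerpt cuts off, but ZFS generally only notices a missing file vdev when it next does I/O against it; a small sketch using the pool from the post:
  zpool scrub filepool    # force I/O to every vdev, so a removed backing file is detected
  zpool status filepool   # the missing file should now show up as UNAVAIL/FAULTED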
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a very huge perfomance drop on our zfs storage server.
We have 2 pools, pool 1 stor is a raidz out of 7 iscsi nodes, home is a local mirror pool. Recently we had some issues with one of the storagenodes, because of that the pool was degraded. Since we did not succeed in bringing this storagenode back online (on zfs level) we upgraded our nashead from opensolaris b57
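A hedged sketch of how the version mismatch in the subject is usually inspected, using the pool name from the post:
  zpool upgrade -v    # list the on-disk pool versions supported by the running kernel module
  zpool upgrade       # with no arguments, lists pools whose on-disk version is older than the current one
  zpool upgrade stor  # upgrade a single pool to the newest supported version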
2011 Apr 01
15
Zpool resize
Hi,
A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm
changing the LUN size on the NetApp, and Solaris format sees the new value, but zpool
still shows the old value.
I tried zpool export and zpool import, but that didn't resolve my problem.
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
2008 Jan 22
0
zpool attach problem
On a V240 running s10u4 (no additional patches), I had a pool which looked like this:
> # zpool status
> pool: pool01
> state: ONLINE
> scrub: none requested
> config:
>
> NAME        STATE     READ WRITE CKSUM
> pool01      ONLINE       0     0     0
>   mirror
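For reference, a hedged sketch of the usual attach syntax with hypothetical device names; the new device is attached to an existing member of the mirror, not to the pool name alone:
  zpool attach pool01 c1t2d0 c1t3d0   # c1t2d0 = existing mirror member, c1t3d0 = new disk (placeholders)
  zpool status pool01                 # watch the resilver of the new mirror leg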
2009 Aug 21
0
bug: zpool create allows a member device to be the raw device of a full partition
If you run Solaris or OpenSolaris, for example, you may use c0t0d0 (for a SCSI disk) or c0d0 (for an IDE/SATA disk) as the system disk.
By default, Solaris x86 and OpenSolaris will use the raw device
c0t0d0s0 (/dev/rdsk/c0t0d0s0) as the member device of rpool.
In fact, there can be more than one Solaris2 partition on each hard disk, so we can also use a raw device like c0t0d0p1 (/dev/rdsk/c0t0d0p1)
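One reading of the truncated report, sketched with hypothetical device and pool names: zpool create will also accept a whole fdisk partition (pN device) as a vdev, even on a disk whose s0 slice already belongs to rpool.
  zpool create -f testpool c0t0d0p2   # hypothetical second Solaris2 partition used as a raw vdev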
2008 Mar 27
5
[Bug 871] New: 'zpool key -l' core dumped with keysource=hex,prompt and unmatched entered in
http://defect.opensolaris.org/bz/show_bug.cgi?id=871
Summary: 'zpool key -l' core dumped with keysource=hex,prompt and
unmatched entered in
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Windows
Status: NEW
Severity: minor
2006 Jun 19
0
snv_42 zfs/zpool dump core and kernel/fs/zfs won't load.
I'm pretty sure this is my fault, but I need some help in fixing the system.
It was installed at one point with snv_29 with the pre-integration
SUNWzfs package. I did a live upgrade to snv_42 but forgot to remove
the old SUNWzfs before I did so. When the system booted up, I got
complaints about kstat install because I still had an old zpool kernel
module lying around.
So I did pkgrm
2013 May 24
0
zpool resource fails with incorrect error
I'm working to expand/develop the zpool built-in type, but the zpool
command is failing, and the stderr Puppet returns is not what I get if I
copy/paste the command given by the debug output.
# cat /etc/puppet/manifests/zpool_raidz2.pp
zpool { 'tank':
ensure => present,
raidz => [ 'd01 d02 d03 d04', 'd05 d06
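For comparison, a hedged sketch of the CLI command the manifest appears to describe; the disk names are the excerpt's placeholders, and the second vdev is truncated in the post:
  zpool create tank raidz d01 d02 d03 d04 raidz d05 d06   # remaining disks elided in the excerpt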
2007 Sep 19
2
import zpool error if use loop device as vdev
Hey guys,
I just did a test using loop devices as vdevs for a zpool.
Procedure as follows:
1) mkfile -v 100m disk1
mkfile -v 100m disk2
2) lofiadm -a disk1 /dev/lofi
lofiadm -a disk2 /dev/lofi
3) zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
4) zpool export pool_1and2
5) zpool import pool_1and2
error info here:
bash-3.00# zpool import pool1_1and2
cannot import
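A sketch of what usually fixes this: zpool import only scans /dev/dsk by default, so lofi-backed pools need the search directory spelled out (and note that the pool name typed at import above differs from the one created in step 3):
  zpool import -d /dev/lofi pool_1and2   # point import at the lofi device directory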
2008 Jul 08
0
Disks errors not shown by zpool?
Ok, this is not an OpenSolaris question, but it is a Solaris and ZFS
question.
I have a pool with three mirrored vdevs. I just got an error message
from FMD that a read failed on one of the disks (c1t6d0), with
instructions on how to handle the problem and replace the device; so
far everything is good. But the zpool still thinks everything is fine.
Shouldn't zpool also show
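A hedged sketch of how the two views are usually reconciled, assuming the pool is simply called tank:
  fmdump -e              # summary of the error events FMD has logged for the disk
  zpool status -v tank   # per-vdev read/write/checksum counters as ZFS has seen them
  zpool scrub tank       # force reads of every allocated block so latent errors surface in zpool status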
2009 Feb 11
0
failmode=continue prevents zpool processes from hanging and being unkillable?
> Dear ZFS experts,
> somehow one of my zpools got corrupted. Symptom is that I cannot
> import it any more. To me it is of lesser interest why that happened.
> What is really challenging is the following.
>
> Any effort to import the zpool hangs and is unkillable. E.g. if I
> issue a "zpool import test2-app" the process hangs and cannot be
> killed. As this
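Since failmode is a pool property, it can also be supplied at import time; a hedged sketch using the pool name from the post:
  zpool import -o failmode=continue test2-app   # return EIO instead of blocking when the pool loses devices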