similar to: ZFS Replication Question

Displaying 20 results from an estimated 30000 matches similar to: "ZFS Replication Question"

2008 Jan 31
7
mounting a copy of a zfs pool/filesystem while the original is still active
Hello Sun gurus. I do not know if this is supported. I have created a zpool consisting of SAN resources and created a ZFS file system on it. Using third-party software I have taken snapshots of all LUNs in the zfs pool. My question is: in a recovery situation, is there a way for me to mount the snapshots and import the pool while the original is still active? Right now all I am able to do is export
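A minimal sketch of the usual recovery pattern (pool names assumed): present the snapshot LUNs to a second host, or mask them from the original host, then import the copy under a new name with an alternate root:

# zpool import                          # scan attached devices for importable pools
# zpool import -f -R /recovery tank tank_copy

One caveat worth hedging: the snapshot LUNs carry the same pool GUID as the live pool, so importing the copy on the same host while the original is active generally fails; a separate host (or a later release that can re-GUID a pool) sidesteps that.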
2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
Hi, I'm struggling to get a stable ZFS replication using Solaris 10 11/06 (current patches) and AVS 4.0 for several weeks now. We tried it on VMware first and ended up in kernel panics en masse (yes, we read Jim Dunham's blog articles :-). Now we try on the real thing, two X4500 servers. Well, I have no trouble replicating our kernel panics there, too ... but I think I
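For reference, a rough outline of an AVS reverse sync for a zpool, based on the sndradm man page (set names elided as <set>; this is a sketch, not a tested procedure): quiesce and export the pool on the primary, pull the secondary's changes back, then re-import:

# zpool export tank                     # on the primary: stop all I/O to the replicated LUNs
# sndradm -n -l <set>                   # drop the set into logging mode
# sndradm -n -u -r <set>                # reverse update sync: secondary -> primary
# sndradm -P                            # watch until the set reports synced
# zpool import tank                     # on the primary, after the reverse sync completes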
2007 Jun 13
5
drive displayed multiple times
So I just imported an old zpool onto this new system. The problem is that one drive (c4d0) shows up twice: first as ONLINE, then again as UNAVAIL. This is obviously causing a problem, as the zpool now thinks it's in a degraded state even though all drives are there and all are online. This pool should have 7 drives total,
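Stale device links carried over from the previous system are a common cause of one disk appearing under two names. A minimal first step, assuming the pool can be taken offline briefly, is to force a fresh device scan:

# zpool export <pool>
# zpool import -d /dev/dsk <pool>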
2009 Jun 01
7
Does zpool clear delete corrupted files
Hi list, First off:

# cat /etc/release
    Solaris 10 6/06 s10x_u2wos_09a X86
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 09 June 2006

Here's an (almost) disaster scenario that unfolded over the past week. We have a very large zpool
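To the question in the subject: no. zpool clear only resets the pool's error counters; it never touches file data. The files with unrecoverable errors are the ones listed by:

# zpool status -v <pool>

The usual sequence is to restore or delete those files, scrub to confirm nothing else is damaged, and only then clear the counts:

# zpool scrub <pool>
# zpool clear <pool>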
2006 Jun 22
1
zfs snapshot restarts scrubbing?
Hi, yesterday I implemented simple hourly snapshots on my filesystems. I also regularly initiate a manual "zpool scrub" on all my pools; usually the scrub runs for about 3 hours. But after enabling hourly snapshots I noticed that the scrub is always restarted when a new snapshot is created, so it basically never has the chance to finish:

# zpool scrub scratch
# zpool
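This was real behavior on releases of that era (a known bug, fixed in later builds): taking a snapshot restarted any scrub or resilver in progress. A sketch of the interaction, and of the practical workaround of pausing scheduled snapshots for the scrub window:

# zpool scrub scratch
# zfs snapshot scratch@hourly
# zpool status scratch | grep scrub     # progress is back near 0% after the snapshot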
2007 Jan 10
2
using veritas dmp with ZFS (but not vxvm)
We have some HDS storage that isn't supported by mpxio, so we have to use Veritas DMP to get multipathing. What's the recommended way to use DMP storage with ZFS? I want to use DMP but get at the multipathed virtual LUNs at as low a level as possible, to avoid using VxVM as much as possible; I figure there's no point in having the overhead of two volume managers if we can avoid it. Has anyone
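In principle the pool can be built directly on the DMP metanodes, bypassing VxVM volumes entirely. A sketch, assuming the multipathed devices show up under /dev/vx/dmp (device names hypothetical):

# zpool create tank mirror /dev/vx/dmp/hds0_0 /dev/vx/dmp/hds0_1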
2006 Dec 12
3
ZFS Corruption
Please reply directly to me. Seeing the message below: is it possible to determine exactly which file is corrupted? I was thinking the OBJECT/RANGE info may be pointing to it, but I don't know how to equate that to a file.

# zpool status -v
  pool: u01
 state: ONLINE
status: One or more devices has experienced an error resulting in data corruption. Applications may be
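On builds where zpool status -v reports raw object numbers instead of pathnames, zdb can usually map a dataset's object number back to a file. A hedged sketch (dataset name and object number hypothetical):

# zdb -ddddd u01/data 18016             # the output's 'path' line names the file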
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS: ZFS filesystem version 4, ZFS storage pool version 15. Yesterday my machine running FreeBSD 8.2 (RELENG) shut down with an 'ad4 detached' error while I was copying a big file, and after the reboot two WD Green 1TB drives said goodbye: one of them died outright, and the other shows ZFS errors: Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6
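Once a replacement drive is in place, the standard path is to swap it in and let the pool resilver; a sketch with hypothetical FreeBSD device names:

# zpool status -x                       # identify the faulted vdev
# zpool replace zroot ad6 ad8           # old device, new device
# zpool scrub zroot                     # after the resilver, verify what survived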
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
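The truncated command is presumably the CIFS equivalent; for the record, the two per-dataset sharing switches on OpenSolaris look like this (dataset name assumed):

# zfs set sharesmb=on pool/filesystem   # in-kernel CIFS server
# zfs set sharenfs=on pool/filesystem   # NFS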
2007 Mar 16
8
ZFS checksum error detection
Hi all. A quick question about the checksum error detection routines in ZFS. Obviously ZFS can act on checksum errors in a redundant environment, but what about a non-redundant one? We connected a single RAID5 array to a v440 acting as an NFS server, and while doing backups and the like we see the "zpool status -v" checksum error counters increment once in a while. Nevertheless the
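In a non-redundant pool ZFS still detects bad blocks (every block is checksummed) but has nothing to repair them from, so the counters increment and the affected reads fail. One partial mitigation, assuming the data matters more than the space, is per-dataset ditto blocks:

# zfs set copies=2 tank/precious        # applies only to data written after the property is set

This guards against isolated bad blocks on the single LUN, not against losing the device.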
2007 Jun 09
2
zfs bug
dd if=/dev/zero of=sl1 bs=512 count=256000
dd if=/dev/zero of=sl2 bs=512 count=256000
dd if=/dev/zero of=sl3 bs=512 count=256000
dd if=/dev/zero of=sl4 bs=512 count=256000
zpool create -m /export/test1 test1 raidz /export/sl1 /export/sl2 /export/sl3
zpool add -f test1 /export/sl4
dd if=/dev/zero of=sl4 bs=512 count=256000
zpool scrub test1

Panic, with a message like the one in the attached image. This message posted
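What happens here (the dd targets are evidently relative to /export): sl4 goes in as a single, unreplicated top-level vdev, hence the -f to override the redundancy-mismatch warning, and the second dd wipes its labels. Scrubbing a pool that has lost an entire top-level vdev panicked ZFS of this vintage instead of faulting the pool. For contrast, a sketch where the extra file is mirrored first, so the pool merely degrades:

dd if=/dev/zero of=/export/sl5 bs=512 count=256000
zpool attach test1 /export/sl4 /export/sl5
zpool status test1     # wait for the resilver onto sl5 to finish
dd if=/dev/zero of=/export/sl4 bs=512 count=256000
zpool scrub test1
zpool status test1     # sl4 UNAVAIL, pool DEGRADED, no panic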
2007 Sep 25
23
device alias
Hi. I'd like to request a feature be added to zfs. Currently, on SAN-attached disk, zpool shows up with a big WWN for the disk. If ZFS (or the zpool command, in particular) had a text field for arbitrary information, it would be possible to add something that would indicate what LUN on what array the disk in question might be. This would make troubleshooting and general
2008 Apr 11
1
zfs concatenation to mirror
Hi, Is it possible to convert a zfs pool from a concatenation of 2 disks to a 2-way mirror without backing up the data, re-creating the pool, and restoring it? i.e. diskA,diskB ---> mirror(diskA,diskB) Thanks, Jeff
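Not in place: both disks already carry live data, so neither can be pulled out to become the other's mirror. What zpool attach can do is turn each existing single-disk vdev into a mirror by adding a new disk to it (diskC and diskD hypothetical):

# zpool attach pool diskA diskC
# zpool attach pool diskB diskD

The result is a stripe of two mirrors, mirror(diskA,diskC) + mirror(diskB,diskD), which is redundant, just not the exact layout asked for.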
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it (Sun Ultra 20 M2); the zpool name is rpool. I have a 2nd hard drive in the box that I am trying to recover the ZFS data from (long story, but that HD became unbootable after installing IPS on the machine). Both drives have a pool named "rpool", so I can't import the rpool from the 2nd drive. root@hyperion:~# zpool status
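The standard way out is to import the second pool by its numeric ID and give it a new name, with an alternate root to keep its mountpoints off the live system (the ID below is hypothetical):

# zpool import                          # lists importable pools with their numeric IDs
# zpool import -f -R /mnt 6930834233839671113 rpool2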
2008 Jan 15
4
Moving zfs to an iSCSI EqualLogic LUN
We have a mirror set up in ZFS that's 73GB (two internal disks on a Sun Fire v440). We are going to attach this system to an EqualLogic box, and will present an iSCSI LUN of about 200GB from the EqualLogic box to the v440. The EqualLogic box is configured as hardware RAID 50 (two hot spares for redundancy). My question is, what's the best approach to moving the ZFS
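One low-downtime approach, hedged as a sketch (device names hypothetical): attach the iSCSI LUN as a third side of the existing mirror, let it resilver, then detach the internal disks:

# zpool attach datapool c1t0d0 <iscsi-lun-device>
# zpool status datapool                 # wait for the resilver to complete
# zpool detach datapool c1t0d0
# zpool detach datapool c1t1d0

Two caveats: the pool may not see the LUN's extra space until the smaller members are gone and the pool is re-imported (behavior varies by release), and the result is a single-vdev pool whose redundancy then lives entirely in the EqualLogic RAID 50.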
2007 Nov 09
3
Major problem with a new ZFS setup
We recently installed a 24-disk SATA array with an LSI controller, attached to a box running Solaris 10 x86 Release 4. The drives were set up in one big pool with raidz, and it worked great for about a month. On the 4th, we had the system kernel panic and crash, and it's now behaving very badly. Here's what diagnostic data I've been able to collect so far: In the
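For collecting more evidence on Solaris, the FMA logs usually say which devices threw which errors; a short diagnostic pass might look like:

# zpool status -xv                      # pools with problems, plus affected files
# fmdump                                # list of FMA fault events
# fmdump -eV | more                     # verbose error reports (I/O, checksum, per device)
# iostat -En                            # per-device hard/soft/transport error counters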
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: FreeBSD with a zpool v28, and a Nexenta (OpenSolaris b134) running zpool v26. Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type cd /remotepool/us (for /remotepool/users) and autocomplete with the tab key, I get a panic. Check the panic @
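For reference, the replication itself is version-safe in this direction, since the receiving pool (v28) is newer than the sending one (v26). A sketch of the send/receive pipeline (host and dataset names hypothetical):

# zfs snapshot -r tank/users@repl1
# zfs send -R tank/users@repl1 | ssh freebsd-host zfs receive -Fdu remotepool

The -u on the receive side leaves the replicated datasets unmounted, a reasonable precaution while the tab-completion panic is unresolved.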
2007 May 23
13
Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.
Hi. I'm all set for doing a performance comparison between Solaris/ZFS and FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I think I'm ready. The machine is 1 x quad-core DELL PowerEdge 1950, 2GB RAM, 15 x 74GB 10K FC disks accessed via 2 x 2Gbit FC links. Unfortunately the links to the disks are the bottleneck, so I'm going to use no more than 4 disks, probably.
2010 Sep 24
3
Kernel panic on ZFS import - how do I recover?
I posted this on the www.nexentastor.org forums, but no answer so far, so I apologize if you are seeing this twice. I am also engaged with Nexenta support, but was hoping to get some additional insights here. I am running Nexenta 3.0.3 Community Edition, based on build 134. The box crashed yesterday, and goes into a reboot loop (kernel panic) when trying to import my data pool; screenshot attached.
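To break a panic-on-import loop, the usual move is to stop the automatic import first: boot from live media, or move /etc/zfs/zpool.cache aside so nothing is imported at boot. Then try a recovery-mode import, which b134 supports; -F rewinds the pool to an earlier, consistent transaction group (pool name assumed to be 'data'):

# zpool import                          # list pools without importing
# zpool import -fFn data                # dry run: report what a rewind would discard
# zpool import -fF data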
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread, so I'm reposting it) I am trying to move my data off of a 40GB 3.5" drive to a 40GB 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
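Two things commonly bite here on SPARC root pools (a Netra is SPARC). First, the mirror side must be attached as a slice on an SMI-labeled disk, not as a whole disk; attaching the whole disk gets it an EFI label, which SPARC OBP cannot boot. Second, installboot needs the ZFS bootblock. A hedged sketch, assuming c0t2d0 is relabeled SMI with s0 covering the disk:

# zpool attach rpool c0t0d0s0 c0t2d0s0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0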