similar to: Can a zpool cachefile be copied between systems?

Displaying 20 results from an estimated 5000 matches similar to: "Can a zpool cachefile be copied between systems?"

2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi, We are seeing more long delays in zpool import, say, 4~5 or even 25~30 minutes, especially when backup jobs are running in the FC SAN where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array, some pools take a few seconds but others take minutes; the pattern seems random to me so far. It was first noticed soon after being upgraded to Solaris 10 U6
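For reference, a minimal sketch of using an explicit cachefile to skip the device scan at import time; the pool name "tank" and the cachefile path are hypothetical:

    # Record the pool's configuration in a dedicated cachefile.
    zpool set cachefile=/etc/zfs/tank.cache tank

    # On import (e.g. from a copy of the file shipped to a failover node),
    # read the cached config instead of probing every LUN on the SAN.
    zpool import -c /etc/zfs/tank.cache tank

With -c the import only has to open the devices named in the cachefile, which is usually much faster than scanning the whole SAN.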
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment with the backend storage being iSCSI-based, in part because of the possibilities for failover. In exploring things in our test environment, I have noticed that 'zpool import' takes a fairly long time; about 35 to 45 seconds per pool. A pool import time this slow obviously has implications for how fast
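One commonly suggested workaround is to limit the device scan with -d; a sketch, assuming a directory of symlinks pointing at just the pool's devices (all paths and names are hypothetical):

    # Collect links to only the devices that belong to the pool.
    mkdir -p /var/run/tankdevs
    ln -s /dev/dsk/c2t0d0s0 /var/run/tankdevs/
    ln -s /dev/dsk/c2t1d0s0 /var/run/tankdevs/

    # Import scans only this directory instead of all of /dev/dsk.
    zpool import -d /var/run/tankdevs tank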
2008 Jan 31
7
mounting a copy of a zfs pool/file system while original is still active
Hello SUN gurus, I do not know if this is supported. I have created a zpool consisting of SAN resources and created a zfs file system. Using third-party software I have taken snapshots of all LUNs in the zfs pool. My question is: in a recovery situation, is there a way for me to mount the snapshots and import the pool while the original is still active? Right now all I am able to do is export
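If the snapshot LUNs are visible as separate devices, one approach (a sketch only, and not guaranteed on every release, since the copy carries the same pool GUID as the original) is to import the copy under a new name with an alternate root so mountpoints don't collide; pool names and paths are hypothetical:

    # Scan the snapshot LUNs and import the copy under a new name,
    # mounting its filesystems under /mnt/recovery instead of /.
    zpool import -d /dev/dsk -R /mnt/recovery tank tank_copy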
2008 Feb 15
2
[storage-discuss] Preventing zpool imports on boot
On Thu, Feb 14, 2008 at 11:17 PM, Dave <dave-opensolaris at dubkat.com> wrote: > I don't want Solaris to import any pools at bootup, even when there were > pools imported at shutdown/at crash time. The process to prevent > importing pools should be automatic and not require any human > intervention. I want to *always* import the pools manually. > > Hrm... what
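One way to get that behavior, sketched below, is to keep the pool out of the default /etc/zfs/zpool.cache so boot never sees it; the pool name "tank" is hypothetical:

    # Don't record this pool in the default cachefile,
    # so it is not auto-imported at boot.
    zpool set cachefile=none tank

    # After boot, import it manually with a normal device scan.
    zpool import tank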
2009 Feb 12
1
strange 'too many errors' msg
Hi, just found on an X4500 with S10u6: fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-GH, TYPE: Fault, VER: 1, SEVERITY: Major EVENT-TIME: Wed Feb 11 16:03:26 CET 2009 PLATFORM: Sun Fire X4500, CSN: 00:14:4F:20:E0:2C , HOSTNAME: peng SOURCE: zfs-diagnosis, REV: 1.0 EVENT-ID: 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b DESC: The number of checksum errors associated with a ZFS device exceeded
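A hedged sketch of the usual follow-up for a message like this; the event UUID comes from the log above, while the pool name "tank" is hypothetical (the snippet doesn't show it):

    # Which pools are unhealthy, and which device took the errors?
    zpool status -xv

    # Details on the fault event reported by fmd.
    fmdump -v -u 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b

    # After replacing or ruling out the device, clear the error counters.
    zpool clear tank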
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron. Zpool scrub runs fine from the command line, no errors. The freeze happens within 30 seconds of the zpool scrub starting. The one core dump I succeeded in taking showed an ARC cache eating up all the RAM. The server's running Solaris 10 u3, kernel patch 127727-11, but it's been patched and seems to have
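For comparison, a typical root crontab entry for a scheduled scrub looks like the sketch below (pool name hypothetical); whether exactly this triggers the freeze is what the poster is debugging:

    # m h dom mon dow  command -- scrub "tank" every Sunday at 03:00
    0 3 * * 0 /usr/sbin/zpool scrub tank

    # Progress can then be checked interactively with:
    # zpool status tank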
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it (Sun Ultra 20 M2); the zpool name is rpool. I have a 2nd hard drive in the box that I am trying to recover the ZFS data from (long story, but that HD became unbootable after installing IPS on the machine). Both drives have a pool named "rpool", so I can't import the rpool from the 2nd drive. root at hyperion:~# zpool status
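The usual way around a name collision is to import by the numeric pool ID and rename in one step; a sketch with a made-up ID and new name:

    # List importable pools; each entry shows a numeric "id:" field.
    zpool import

    # Import the second drive's pool by ID under a different name.
    zpool import 6930558949053977169 rpool2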
2008 Mar 20
7
ZFS panics Solaris while switching a volume to read-only
Hi, I just found out that ZFS triggers a kernel panic while switching a mounted volume into read-only mode: The system is attached to a Symmetrix, all zfs-io goes through Powerpath: I ran some io-intensive stuff on /tank/foo and switched the device into read-only mode at the same time (symrdf -g bar failover -establish). ZFS went 'bam' and triggered a Panic: WARNING: /pci at
2008 Aug 22
2
zpool autoexpand property - HowTo question
I noted this PSARC thread with interest: Re: zpool autoexpand property [PSARC/2008/353 Self Review] because it so happens that during a recent disk upgrade on a laptop, I've migrated a zpool off of one partition onto a slightly larger one, and I'd like to somehow tell zfs to grow the zpool to fill the new partition. So, what's the best way to do this? (and is it
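On builds that actually have the PSARC/2008/353 bits, the property plus an explicit online -e is the usual recipe; a sketch with hypothetical pool and device names:

    # Let the pool grow automatically when its devices grow.
    zpool set autoexpand=on rpool

    # Or expand one device by hand after its partition was enlarged.
    zpool online -e rpool c0t0d0s3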
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For on-going maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
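For what it's worth, a scrub reads and verifies checksums on every allocated block, user data as well as metadata, even on top of hardware RAID; a minimal sketch, pool name hypothetical:

    # Kick off a full verification pass of all allocated blocks.
    zpool scrub tank

    # Check progress and any checksum errors found so far.
    zpool status -v tank

Without redundancy in the pool, ZFS can detect corruption in user data this way but generally cannot repair it (most metadata keeps duplicate ditto copies and can self-heal).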
2007 Feb 13
1
Zpool complains about missing devices
Hello, We had a situation at a customer site where one of the zpools complains about missing devices. We do not know which devices are missing. Here are the details: The customer had a zpool created on a hardware RAID (SAN). There is no redundancy in the pool. The pool had 13 LUNs; the customer wanted to increase its size and added 5 more LUNs. During the zpool add process the system panicked with zfs
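A sketch of the first diagnostic steps for this kind of report; the pool name is hypothetical:

    # Shows per-vdev state, including which devices are UNAVAIL/missing.
    zpool status -v tank

    # Print the pool configuration as recorded in the cachefile.
    zdb -C tank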
2010 Jun 02
11
ZFS recovery tools
Hi, I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks to some great forum posts from Victor Latushkin; however, without his posts I would still be crying at night... I think the worst example is the zdb man page; all it does is ask you
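As a starting point, the two zdb invocations that come up most often in recovery threads are sketched below; the device path and pool name are hypothetical:

    # Dump the four ZFS labels on a device (uberblocks, pool config).
    zdb -l /dev/dsk/c0t0d0s0

    # Examine the datasets of an exported / not-yet-imported pool.
    zdb -e -d tank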
2010 Nov 12
11
how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Hi, How can I quiesce / freeze all writes to zfs and zpool if I want to take hardware-level snapshots or array snapshots of all devices under a pool? Are there any commands, ioctls, or APIs available? Thanks & Regards, sridhar. -- This message posted from opensolaris.org
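As far as I know there is no user-facing freeze/thaw command for ZFS; the only fully quiesced on-disk state is an exported pool. A sketch of the conservative sequence (pool name hypothetical):

    # Optional ZFS-level consistency point first.
    zfs snapshot -r tank@pre-array-snap

    # Export flushes and closes the pool -- disruptive, but quiesced.
    zpool export tank

    # ... take the array / hardware snapshot here ...

    zpool import tank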
2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
Hi, I'm struggling to get a stable ZFS replication using Solaris 10 11/06 (current patches) and AVS 4.0 for several weeks now. We tried it on VMware first and ended up in kernel panics en masse (yes, we read Jim Dunham's blog articles :-). Now we try on the real thing, two X4500 servers. Well, I have no trouble replicating our kernel panics there, too ... but I think I
2006 Nov 28
7
Convert Zpool RAID Types
Hello, Is it possible to non-destructively change RAID types in a zpool while the data remains online? -J
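The short answer in most of these threads: no, a vdev's RAID type cannot be changed in place; the workaround is migrating to a new pool built with the desired layout. A sketch for releases that support recursive streams (pool names hypothetical):

    # Snapshot everything, then replicate into the new pool.
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F newtank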
2011 May 03
4
multiple disk failures cause zpool hang
Hi, There seem to be a few threads about zpool hangs; do we have a workaround to resolve the hang issue without rebooting? In my case, I have a pool with disks from external LUNs via a fiber cable. When the cable is unplugged while there is IO in the pool, all zpool-related commands hang (zpool status, zpool list, etc.), and putting the cable back does not solve the problem. Eventually, I
2008 Aug 04
1
S10u6, zfs and zones
My server runs S10u5. All slices are UFS. I run a couple of sparse zones on a separate slice mounted on /zones. When S10u6 comes out, booting off ZFS will become possible. That is great news. However, will it be possible to keep those zones I run now too? I always understood ZFS and root zones are difficult. I hope to be able to change all FS to ZFS, including the space for the sparse zones. Does
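A sketch of what the ZFS variant could look like once zonepaths on ZFS are supported; the dataset and zone names are hypothetical:

    # One dataset per zone root instead of a UFS slice.
    zfs create -o mountpoint=/zones rpool/zones
    zfs create rpool/zones/webzone

    # Point the zone at it.
    zonecfg -z webzone "create; set zonepath=/zones/webzone"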
2010 Jan 24
4
zfs streams
Can I send a zfs send stream (ZFS pool version 22; ZFS filesystem version 4) to a zfs receive stream on Solaris 10 (ZFS pool version 15; ZFS filesystem version 4)? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.03 b131 + All that's really worth doing is what we do for others (Lewis Carroll)
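As I understand it, what matters is that the receiving side understands the stream, which tracks the filesystem version rather than the pool version; since both ends here are at filesystem version 4, a plain stream is expected to work. A sketch with hypothetical host and dataset names:

    # On the newer (sending) host:
    zfs snapshot tank/data@xfer
    zfs send tank/data@xfer | ssh oldhost zfs recv -d backup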
2010 Nov 23
1
drive replaced from spare
I have a x4540 with a single pool made from a bunch of raidz1s with 2 spares (Solaris 10 u7). It's been running great for over a year, but I've had my first event. A day ago the system activated one of the spares, c4t7d0, but given the status below, I'm not sure what to do next. # zpool status pool: pool1 state: ONLINE scrub: resilver completed after 2h25m
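The usual sequence once a spare is in service is sketched below; the failed device name is hypothetical, since the snippet doesn't show it:

    # Replace the failed disk in place (new disk in the same slot).
    zpool replace pool1 c1t2d0

    # After the resilver completes, return the spare to the spares list
    # (often this happens automatically once the replacement is healthy).
    zpool detach pool1 c4t7d0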
2007 Sep 11
7
compression=on and zpool attach
I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored zpool. Noticed during some performance testing today that it's i/o bound but using hardly any CPU, so I thought turning on compression would be a quick win. I know I'll have to copy files for existing data to be compressed, so I was going to make a new filesystem, enable compression, and rsync everything in,
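A sketch of that migration; the dataset names are hypothetical:

    # New filesystem with compression enabled from the start.
    zfs create -o compression=on tank/db-compressed

    # Copy the data so it actually gets compressed on write.
    rsync -a /tank/db/ /tank/db-compressed/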