similar to: ZFS Panicing System Cluster Crash effect

Displaying 20 results from an estimated 5000 matches similar to: "ZFS Panicing System Cluster Crash effect"

2007 Sep 17
1
Strange behavior with ZFS and Solaris Cluster
Hi All, We have two- and three-node clusters with SC3.2 and S10u3 (120011-14). If a node is rebooted when using SCSI3-PGR, the node is not able to take the zpool via HAStoragePlus due to a reservation conflict. SCSI2-PGRE is okay. Using the same SAN LUNs in a metaset (SVM) with HAStoragePlus works okay with both PGR and PGRE (both SMI- and EFI-labeled disks). If using scshutdown and restarting all nodes, then it will
2007 Dec 17
1
HA-NFS AND HA-ZFS
We are currently running Sun Cluster 3.2 on Solaris 10u3. We are using UFS/VxVM 4.1 as our shared file systems. However, I would like to migrate to HA-NFS on ZFS. Since there is no conversion process from UFS to ZFS other than a copy, I would like to migrate on my own schedule. To do this I am planning to add a new zpool HAStoragePlus resource to my existing HA-NFS resource group. This way I can migrate
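
For reference, adding a zpool-backed HAStoragePlus resource to an existing resource group on Sun Cluster 3.2 looks roughly like the following; a minimal sketch, where the resource group name nfs-rg, pool name nfspool, and resource name nfspool-rs are hypothetical:

    # Register the resource type once, if not already registered
    clresourcetype register SUNW.HAStoragePlus
    # Put the zpool under cluster control inside the existing HA-NFS group
    clresource create -g nfs-rg -t SUNW.HAStoragePlus -p Zpools=nfspool nfspool-rs
    clresource enable nfspool-rs
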
2007 Dec 21
1
Odd behavior of NFS on ZFS versus UFS
I have a test cluster running HA-NFS that shares both UFS- and ZFS-based file systems. However, the behavior that I am seeing is a little perplexing. The setup: I have Sun Cluster 3.2 on a pair of Sun Blade 1000s connecting to two T3B partner groups through a QLogic switch. All four bricks of the T3B are configured as RAID-5 with a hot spare. One brick from each pair is mirrored with VxVM
2009 Oct 09
22
Does ZFS work with SAN-attached devices?
Hi All, It's been a while since I touched ZFS. Is the below still the case with ZFS and hardware RAID arrays? Do we still need to provide two LUNs from the hardware RAID and then have ZFS mirror those two LUNs? http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid Thanks, Shawn
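
The linked FAQ's suggestion boils down to giving ZFS its own redundancy on top of the array so it can self-heal checksum errors. A minimal sketch, assuming the array exports two LUNs as c2t0d0 and c3t0d0:

    # Mirror the two hardware-RAID LUNs so ZFS can repair bad blocks itself
    zpool create tank mirror c2t0d0 c3t0d0
    zpool status tank
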
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello, I'm working with someone who replaced a failed 1TB drive (50% utilized) on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported: scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go and a week being 168 hours, that put completion at sometime tomorrow night. However, he just reported zpool status shows:
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID-5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For ongoing maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
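
A scrub is still worthwhile here: ZFS checksums both data and metadata, so a scrub on a pool without ZFS-level redundancy can still detect corruption, even though it cannot repair user data. A minimal sketch, assuming the pool is named tank:

    zpool scrub tank
    zpool status -v tank    # shows scrub progress and any checksum errors found
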
2006 Jun 13
4
ZFS panic while mounting lofi device?
I believe ZFS is causing a panic whenever I attempt to mount an ISO image (SXCR build 39) that happens to reside on a ZFS file system. The problem is 100% reproducible. I'm quite new to OpenSolaris, so I may be incorrect in saying it's ZFS's fault. Also, let me know if you need any additional information or debug output to help diagnose things. Config: bash-3.00#
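
For reference, the usual lofi sequence for mounting an ISO on Solaris; a minimal sketch, with a hypothetical image path:

    # Map the ISO file to a lofi block device; lofiadm prints the device name
    lofiadm -a /tank/images/sxcr_b39.iso
    # Mount the returned device (here /dev/lofi/1) read-only as HSFS
    mount -F hsfs -o ro /dev/lofi/1 /mnt
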
2009 Mar 11
9
ZFS on a SAN
Hi All, I'm new to ZFS, so I hope this isn't too basic a question. I have a host where I set up ZFS. The Oracle DBAs did their thing, and I now have a number of ZFS datasets with their respective clones and snapshots on serverA. I want to export some of the clones to serverB. Do I need to zone serverB to see the same LUNs as serverA? Or does it have to have preexisting,
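
Worth noting: a pool can only be imported on one host at a time, and a clone cannot leave its pool, so zoning serverB to the same LUNs would mean moving the entire pool. A common alternative is replication with zfs send/receive; a sketch, with hypothetical dataset names:

    # Copy a clone's contents to serverB over ssh
    zfs snapshot tank/db-clone@xfer
    zfs send tank/db-clone@xfer | ssh serverB zfs receive tank/db-copy
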
2011 Apr 01
15
Zpool resize
Hi, a LUN is connected to Solaris 10u9 from a NetApp FAS2020A via iSCSI. I'm changing the LUN size on the NetApp, and Solaris format sees the new value, but the zpool still has the old value. I tried zpool export and zpool import, but that didn't resolve my problem. bash-3.00# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
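
On builds that support the autoexpand pool property (pool version 16 and later, which Solaris 10u9 has), the pool does not grow automatically when its LUN grows unless that property is set; the expansion can also be triggered per device. A sketch, assuming the pool is named tank and the device is the c0d1 shown above:

    # Let the pool grow when its underlying LUN grows
    zpool set autoexpand=on tank
    # Or expand this one device in place after the LUN resize
    zpool online -e tank c0d1
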
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi, a while back I created a zpool of single vdevs, not using mirroring of any kind. Now I wonder if it's possible to add disks and mirror the currently existing vdevs. Thanks, budy
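
Yes: zpool attach turns a single-disk vdev into a two-way mirror, one vdev at a time. A sketch, assuming a two-disk stripe on c0t0d0 and c0t1d0, with c0t2d0 and c0t3d0 as the new disks:

    # Attach a new disk to each existing single-disk vdev
    zpool attach tank c0t0d0 c0t2d0
    zpool attach tank c0t1d0 c0t3d0
    zpool status tank    # wait for the resilver to complete
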
2008 Apr 29
4
Finding Pool ID
Folks, how can I find out a zpool's ID without using zpool import? zpool list and zpool status do not have an option for this as of Solaris 10U5. Any back door to grab this property would be helpful. Thank you, Ajay
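
One back door is to read the pool GUID straight from the on-disk vdev label with zdb; a sketch, with a hypothetical device path:

    # The vdev label dump includes a pool_guid field
    zdb -l /dev/rdsk/c0t0d0s0 | grep pool_guid
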
2007 Apr 10
15
Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
Hi, one quick-and-dirty way of backing up a pool that is a mirror of two devices is to zpool attach a third one, wait for the resilvering to finish, then zpool detach it again. The third device can then be used as a poor man's simple backup. Has anybody tried it yet with a striped mirror? What if the pool is composed of two mirrors? Can I attach devices to both mirrors, let them
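
On a pool made of two mirrors, the disks would have to be attached to and detached from each top-level vdev. One caveat: a plainly detached disk generally cannot be imported as a pool on its own, which is the gap the later zpool split command was designed to fill. A sketch of the attach/detach round trip, with hypothetical device names:

    # tank is two two-way mirrors; attach a third disk to each
    zpool attach tank c1t0d0 c2t0d0
    zpool attach tank c1t1d0 c2t1d0
    # once 'zpool status' shows the resilver is done:
    zpool detach tank c2t0d0
    zpool detach tank c2t1d0
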
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
Hi, yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98. I can't use the AI Installer because OpenPROM is version 3.27. So I built IPS from source, then created a zpool on a spare drive and installed OS 2009.06 on it. To make the disk bootable I used: installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0 using the executable from my new
2009 Dec 06
20
Accidentally added disk instead of attaching
Hi, I wanted to add a disk to the tank pool to create a mirror. I accidentally used zpool add instead of zpool attach, and now the disk is added. Is there a way to remove the disk without losing data? Or maybe change it to a mirror? Thanks, Martijn
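
At the time there was no way to remove a top-level vdev from a pool (zpool remove only handled spares, cache, and log devices), so the usual way out was to recreate the pool, or to mirror both vdevs. The intended command looks like this; a sketch, with hypothetical device names:

    # Intended: make c0t1d0 a mirror of the existing disk c0t0d0
    zpool attach tank c0t0d0 c0t1d0
    # What 'zpool add tank c0t1d0' did instead was create a second
    # top-level vdev, which cannot be removed on these builds
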
2007 Sep 19
2
zpool import error when using loop devices as vdevs
Hey guys, I just did a test using loop devices as vdevs for a zpool. The procedure was as follows: 1) mkfile -v 100m disk1 mkfile -v 100m disk2 2) lofiadm -a disk1 /dev/lofi lofiadm -a disk2 /dev/lofi 3) zpool create pool_1and2 /dev/lofi/1 and /dev/lofi/2 4) zpool export pool_1and2 5) zpool import pool_1and2 error info here: bash-3.00# zpool import pool1_1and2 cannot import
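
For reference, a cleaned-up sketch of that procedure. Two things to watch: zpool import scans /dev/dsk by default, so lofi-backed pools need -d /dev/lofi, and the import must use the exact name the pool was created with (the excerpt's import says pool1_1and2 while the pool was created as pool_1and2):

    mkfile -v 100m /var/tmp/disk1
    mkfile -v 100m /var/tmp/disk2
    lofiadm -a /var/tmp/disk1            # prints /dev/lofi/1
    lofiadm -a /var/tmp/disk2            # prints /dev/lofi/2
    zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
    zpool export pool_1and2
    zpool import -d /dev/lofi pool_1and2
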
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds, when I issue zpool list, it does not show any pool, and when I try to import again, it says a device is missing in the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath. abcxxxx # zpool import pool: emcpool1 id:
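
A PowerPath upgrade can change the pseudo-device paths a pool was built on, so two common first steps are verifying the emcpower devices and pointing the import explicitly at the device directory; a sketch, with the pool name from the excerpt and an assumed device directory:

    # Confirm the PowerPath pseudo devices survived the upgrade
    powermt display dev=all
    # Search the device directory explicitly when importing
    zpool import -d /dev/dsk emcpool1
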
2011 May 28
7
Have my RMA... Now what??
I have a raidz2 pool with one disk that seems to be going bad; several errors are noted in iostat. I have an RMA for the drive; however, now I am wondering how to proceed. I need to send the drive in, and then they will send me one back. If I had the replacement drive on hand, I could do a zpool replace. Do I do a zpool offline? zpool detach? Once I get the drive back and put it in the same drive bay..
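
For a raidz2 vdev the usual sequence is offline, swap, replace; zpool detach applies to mirrors, not raidz. A sketch, assuming the failing disk is c0t5d0:

    # Take the failing disk offline before pulling it
    zpool offline tank c0t5d0
    # ...swap the physical drive in the same bay, then:
    zpool replace tank c0t5d0
    zpool status tank    # watch the resilver
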
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I've got to say, I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or this morning, probably because my home dir was on the zpool with the failed disk
2010 Apr 06
15
Why we won't use zpool ever again
Hi everyone, just wanted to tell you a little story. We've been enthusiastic Puppet users for about a year here at the Geographic Institute of the University of Zürich. But we won't use the zpool type ever again. It's just not worth it. Here's what happened: one of our servers lost knowledge about one of its ZFS pools; Puppet didn't find the pool
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature, it seems to work well, and the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132). Started with c0t1d0s0 running b132 (the root pool is called rpool); attached c0t0d0s0 and waited for it to resilver; rebooted from c0t0d0s0; zpool split rpool spool; rebooted from c0t0d0s0 again, and both rpool and spool were mounted
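
For context, zpool split (available since build 131) detaches one side of each mirror and turns it into a new, importable pool in one step. A minimal sketch matching the names above:

    # Split the second half of each mirror in rpool into a new pool
    zpool split rpool spool
    # The new pool is left exported; import it to use it
    zpool import spool
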