
Displaying 20 results from an estimated 2000 matches similar to: "this command can cause zpool coredump!"

2007 Nov 13
3
zpool status cannot detect the removed vdev?
I made a file-backed zpool like this:

bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE   READ WRITE CKSUM
        filepool          ONLINE     0     0     0
          /export/f1.dat  ONLINE     0     0     0
          /export/f2.dat  ONLINE     0     0     0
          /export/f3.dat  ONLINE     0     0     0
        spares
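For reference, a pool like the one above can be reproduced with plain files. A minimal sketch, assuming 64 MB backing files and /export/f4.dat as the spare (both assumptions):

  # create backing files; 64 MB is the minimum vdev size ZFS accepts
  mkfile 64m /export/f1.dat /export/f2.dat /export/f3.dat /export/f4.dat
  # dynamic stripe over three file vdevs, one file as a hot spare
  zpool create filepool /export/f1.dat /export/f2.dat /export/f3.dat \
      spare /export/f4.dat
  # unlinking a backing file is NOT noticed immediately: the pool still
  # holds the vnode open, so I/O keeps succeeding until export/re-import
  rm /export/f2.dat
  zpool export filepool
  zpool import -d /export filepool   # now the missing vdev is reported
  zpool status filepool

That open-vnode behaviour is the most likely reason zpool status stays ONLINE after the rm in the post.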
2007 Nov 13
0
In a zpool consisting of regular files, when I remove a file vdev, why does zpool status not detect it?
I made a file-backed zpool like this:

bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE   READ WRITE CKSUM
        filepool          ONLINE     0     0     0
          /export/f1.dat  ONLINE     0     0     0
          /export/f2.dat  ONLINE     0     0     0
          /export/f3.dat  ONLINE     0     0     0
        spares
2008 Jan 15
4
Moving ZFS to an iSCSI EqualLogic LUN
We have a mirror set up in ZFS that's 73 GB (two internal disks on a Sun Fire V440). We are going to attach this system to an EqualLogic box, and will attach an iSCSI LUN of about 200 GB from the EqualLogic box to the V440. The EqualLogic box is configured as hardware RAID 50 (two hot spares for redundancy). My question is what's the best approach to moving the ZFS
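A hedged sketch of the usual migration path, assuming the pool is called tank and using placeholder device names: attach the iSCSI LUN as an extra mirror side, let it resilver, then drop the internal disks.

  # c3t0d0 stands in for the new iSCSI LUN (hypothetical name)
  zpool attach tank c1t0d0s0 c3t0d0
  zpool status tank            # wait until the resilver completes
  zpool detach tank c1t0d0s0   # then remove both internal sides
  zpool detach tank c1t1d0s0

Note that the pool stays at 73 GB until the last small device is detached, and on older releases may also need an export/import before the extra LUN capacity shows up.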
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss, Relatively low traffic to the pool, but sync takes too long to complete and other operations are also not that fast. Disks are on a 3510 array. zil_disable=1.

bash-3.00# ptime sync
real     1:21.569
user        0.001
sys         0.027

During sync, zpool iostat and vmstat look like:

f3-1    504G   720G    370    859   995K  10.2M
misc   20.6M  52.0G      0      0
2011 Apr 01
15
Zpool resize
Hi, a LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm changing the LUN size on the NetApp, and the Solaris format utility sees the new value, but zpool still shows the old value. I tried zpool export and zpool import, but it didn't resolve my problem.

bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
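On builds whose zpool version has the autoexpand property (an assumption for this u9 system), the usual sequence is below; the pool name mypool is a placeholder:

  # let the pool grow into resized LUNs automatically
  zpool set autoexpand=on mypool
  # or expand a single device in place after the LUN grows
  zpool online -e mypool c0d1

As far as I know, older releases instead required relabeling the device so the slice covers the new size before export/import would pick it up.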
2006 Aug 18
4
ZFS Filesystem Corruption
Hi, I have been seeing data corruption on the ZFS filesystem. Here are some details. The machine is running s10 on the x86 platform with a single 160 GB SATA disk (root on s0 and zfs on s7). ...Sanjaya

--------- /etc/release ----------
-bash-3.00# cat /etc/release
Solaris 10 6/06 s10x_u2wos_09a X86
Copyright 2006 Sun Microsystems, Inc. All Rights
2007 Feb 03
4
Which label does a ZFS/zpool device have? VTOC or EFI?
Hi All, zpool/zfs commands write an EFI label on a device if we create a zpool/ZFS filesystem on it. Is that true? I formatted a device with a VTOC label and created a ZFS file system on it. Now which label does the device have? Is it the old VTOC or EFI? After creating the ZFS file system on the VTOC-labeled disk, I am seeing the following warning messages. Feb 3 07:47:00 scoobyb
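For what it's worth: zpool create writes an EFI label only when handed a whole disk (cXtYdZ); given a slice such as cXtYdZs0, it leaves the existing VTOC label alone. A quick check, with the device name as a placeholder:

  # a VTOC/SMI label prints cylinder geometry lines; an EFI label
  # prints plain sector offsets with no cylinder information
  prtvtoc /dev/rdsk/c1t0d0s2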
2007 Sep 17
2
zpool create -f not applicable to hot spares
Hello zfs-discuss, If you do 'zpool create -f test A B C spare D E' and D or E contains a UFS filesystem, then despite -f the zpool command will complain that there is a UFS file system on D. Workaround: create a test pool with -f on D and E, destroy it, and then create the first pool with D and E as hot spares. I've tested it on s10u3 + patches - can someone confirm
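The described workaround, spelled out as a sketch (A..E and the scratch pool name are placeholders):

  # force-create a throwaway pool over the future spares to clear
  # the UFS signature, then destroy it
  zpool create -f scratch D E
  zpool destroy scratch
  # now the spares attach without the UFS complaint
  zpool create test A B C spare D E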
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi. The system is snv_56 sun4u SPARC, SUNW,Sun-Fire-V440, zil_disable=1. We see many operations from NFS clients to that server running really slowly (like 90 seconds for unlink()). It's not a problem with the network; there's also plenty of CPU available. Storage isn't saturated either. First strange thing - normally nfsd on that server has about 1500-2500 threads. I did
2006 Mar 10
3
pool space reservation
What is a use case for setting a reservation on the base pool object? Say I have a pool of 3 100GB drives dynamically striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used? Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2
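A reservation at the pool level is really a property of the pool's root dataset: it guarantees space to that dataset against its descendants, and it never shrinks or resizes the pool itself. A minimal sketch, with the pool name tank assumed:

  zfs set reservation=200g tank
  zfs get reservation,available tank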
2010 Mar 19
3
zpool I/O error
Hi all, I'm trying to delete a zpool and when I do, I get this error:

# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#

The pools I have on this box look like this:

# zpool list
NAME          SIZE   USED  AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1   532G   119K   532G   0%  DEGRADED  -
rpool         136G  28.6G   107G  21%  ONLINE    -
#

Why
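A hedged first round of diagnostics for a DEGRADED pool that refuses to destroy; the pool name is from the post, and whether -f helps in this case is an assumption:

  zpool status -v oradata_fs1   # which vdev is faulted?
  fmdump -eV | tail -20         # FMA error log for the underlying devices
  zpool destroy -f oradata_fs1  # force, once the cause is understood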
2007 Feb 18
7
ZFS best practice for 2U SATA iSCSI NAS
Is there a best-practice guide for using ZFS as a basic rackable small storage solution? I'm considering ZFS on a 2U 12-disk Xeon-based server system versus something like a second-hand FAS250. The target environment is a mixture of Xen or VI hosts via iSCSI and NFS/CIFS. Being able to take snapshots of running (or maybe paused) Xen iSCSI vols and re-export them for cloning and remote backup
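A sketch of that snapshot/clone/re-export cycle, assuming the Xen volumes are zvols and the shareiscsi property of that era's OpenSolaris builds; all names are placeholders:

  zfs snapshot tank/xenvol@backup
  zfs clone tank/xenvol@backup tank/xenvol-clone
  zfs set shareiscsi=on tank/xenvol-clone   # export the clone as a new target
  # stream the snapshot to a remote host for backup
  zfs send tank/xenvol@backup | ssh backuphost zfs receive backup/xenvol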
2007 Jun 15
1
ZFS zpool created with MPxIO devices question
Customer asks: Will SunCluster 3.2 support ZFS zpool created with MPxIO devices instead of the corresponding DID devices? Will it cause any support issues? Thank you, James Lefebvre -- James Lefebvre - OS Technical Support james.lefebvre at Sun.com (800)USA-4SUN (Reference your Case Id #) Hours 8:00 - 5:00 EST Sun Support Services 4 Network Drive, UBUR04-105 Burlington MA
2007 Mar 16
8
ZFS checksum error detection
Hi all. A quick question about the checksum error detection routines in ZFS. Surely ZFS can decide about checksum errors in a redundant environment, but what about a non-redundant one? We connected a single RAID5 array to a V440 as an NFS server, and while doing backups and the like we see the "zpool status -v" checksum error counters increment once in a while. Nevertheless the
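On a non-redundant vdev ZFS can detect but not repair bad user data. One mitigation is the copies property, which stores extra ditto copies of each block so self-healing works even on a single LUN; the dataset name is a placeholder, and the setting only affects newly written blocks:

  zfs set copies=2 tank/nfsdata
  zpool status -v tank   # per-vdev CKSUM counters plus damaged files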
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For ongoing maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
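Contrary to the metadata-only worry, a scrub reads and verifies the checksum of every allocated block, user data included; on a pool without ZFS-level redundancy it can detect (though not repair) corruption, while metadata still self-heals via its ditto copies. The pool name below is a placeholder:

  zpool scrub tank
  zpool status -v tank   # scrub progress plus any checksum errors found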
2006 Jul 15
2
zvol or files for Oracle?
Hello zfs-discuss, What would you rather propose for ZFS+ORACLE - zvols or just files from the performance standpoint? -- Best regards, Robert mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
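For contrast, the two options look roughly like this; names and sizes are placeholders, and recordsize=8k assumes an 8 KB Oracle block size:

  # a zvol, exposed as /dev/zvol/dsk/tank/oravol
  zfs create -V 10g tank/oravol
  # or a filesystem with recordsize matched to the database block size
  zfs create tank/oradata
  zfs set recordsize=8k tank/oradata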
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi, We are seeing more long delays in zpool import, say, 4~5 or even 25~30 minutes, especially when backup jobs are going on in the FC SAN where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array, some pools take a few seconds but others take minutes; the pattern seems random to me so far. It was first noticed soon after being upgraded to Solaris 10 U6
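The cachefile route, which lets import read the pool config from a file instead of scanning every LUN on the SAN; paths and pool name are placeholders:

  # record the pool's config while it is imported
  zpool set cachefile=/etc/zfs/mypool.cache mypool
  # later, import from the cachefile instead of scanning all devices
  zpool import -c /etc/zfs/mypool.cache mypool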
2007 Jul 14
3
zfs list hangs if zfs send is killed (leaving zfs receive process)
I was in the process of doing a large zfs send | zfs receive when I decided that I wanted to terminate the zfs send process. I killed it, but the zfs receive doesn't want to die... In the meantime my zfs list command just hangs. Here is the tail end of the truss output from a "truss zfs list":

ioctl(3, ZFS_IOC_OBJSET_STATS, 0x08043484) = 0
ioctl(3,
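The pipeline in question, for reference; killing only the sender leaves the receiver blocked in the kernel, so terminating the whole pipeline (or both processes) is the safer move. Dataset names are placeholders:

  zfs send tank/fs@snap | zfs receive backup/fs &
  # signal the whole job, not just zfs send
  kill %1   # or: pkill -f 'zfs send'; pkill -f 'zfs receive'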
2007 Mar 23
2
ZFS on top of SVM - CKSUM errors
Hi.

bash-3.00# uname -a
SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc

I created the first zpool (a stripe of 85 disks) and did some simple stress testing - everything seemed almost all right (~700MB sequential reads, ~430 sequential writes). Then I destroyed the pool and put an SVM stripe on top of the same disks, utilizing the fact that ZFS had already put an EFI label on them and s0 represents almost the entire disk. Then on top of
2006 Dec 12
1
ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here is our plan to incorporate ZFS: on our EMC storage array we will create 3 LUNs. Now how should ZFS be used for the best performance? What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
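The two layouts under discussion, sketched with placeholder LUN names:

  # one pool dynamically striped across all three LUNs
  zpool create tank c2t0d0 c2t1d0 c2t2d0
  # versus a pool per LUN, isolating workloads and failure domains
  zpool create tank1 c2t0d0
  zpool create tank2 c2t1d0
  zpool create tank3 c2t2d0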