similar to: zpool import

Displaying 20 results from an estimated 7000 matches similar to: "zpool import"

2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi, we are seeing more long delays in zpool import, say 4~5 or even 25~30 minutes, especially when backup jobs are running on the FC SAN where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array, some pools take a few seconds but others take minutes; the pattern seems random to me so far. It was first noticed soon after being upgraded to Solaris 10 U6
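The cachefile trick the subject alludes to: a minimal sketch, assuming a pool named tank and a cachefile path of /etc/zfs/san-pools.cache (both illustrative, not from the thread), of how a non-default cachefile lets a later import skip probing every LUN on the SAN.

    # record the pool's current configuration in a non-default cachefile
    zpool set cachefile=/etc/zfs/san-pools.cache tank

    # later, import from the cachefile instead of scanning all devices
    zpool import -c /etc/zfs/san-pools.cache tank

Whether this helps here depends on whether the delay is in the device scan or in the pool's own transaction replay.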
2009 Aug 27
0
How are you supposed to remove faulted spares from pools?
We have a situation where all of the spares in a set of pools have gone into a faulted state and now, apparently, we can't remove them or otherwise de-fault them. I'm confident that the underlying disks are fine, but ZFS seems quite unwilling to do anything about the spares situation. (The specific faulted state is 'FAULTED corrupted data' in 'zpool
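A sketch of the sequence that normally clears or removes a hot spare, assuming a pool named tank and a spare device c1t9d0 (both illustrative); whether any of these actually works on a spare stuck in 'FAULTED corrupted data' is exactly what the thread is asking.

    # try to clear the fault on the spare in place
    zpool clear tank c1t9d0

    # failing that, drop the spare from the pool and add it back
    zpool remove tank c1t9d0
    zpool add tank spare c1t9d0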
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron. zpool scrub runs fine from the command line, no errors. The freeze happens within 30 seconds of the zpool scrub starting. The one core dump I succeeded in taking showed the ARC cache eating up all the RAM. The server's running Solaris 10 u3, kernel patch 127727-11, but it's been patched and seems to have
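For reference, the kind of crontab entry being described is a one-liner; the pool name tank and the Sunday 03:00 schedule below are illustrative.

    # run a scrub at 03:00 every Sunday; the command returns immediately
    # and the scrub itself proceeds in the background
    0 3 * * 0 /usr/sbin/zpool scrub tank

The interesting part of the thread is why the same command misbehaves only when launched from cron.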
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment with the backend storage being iSCSI-based, in part because of the possibilities for failover. In exploring things in our test environment, I have noticed that 'zpool import' takes a fairly long time; about 35 to 45 seconds per pool. A pool import time this slow obviously has implications for how fast
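One common workaround, besides a non-default cachefile, is to narrow the device scan with -d; a hedged sketch, assuming a directory populated with links to just the LUNs that back the pool (the directory name, device path, and pool name are illustrative).

    # keep links to only the relevant LUNs in one directory
    mkdir -p /san/tank-devs
    ln -s /dev/dsk/c4t0d0s0 /san/tank-devs/

    # restrict the import-time scan to that directory
    zpool import -d /san/tank-devs tank

How much this saves depends on how much of the 35 to 45 seconds is spent probing unrelated LUNs.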
2010 Sep 01
0
stmf corruption and dealing with dynamic lun mapping
I am running Nexenta NCP 3.0 (134f). My stmf configuration was corrupted. I was getting errors like the following in /var/adm/messages:
Sep 1 10:32:04 llift-zfs1 svc-stmf[378]: [ID 130283 user.error] get property view_entry-0/all_hosts failed - entity not found
Sep 1 10:32:04 llift-zfs1 svc.startd[9]: [ID 652011 daemon.warning] svc:/system/stmf:default: Method "/lib/svc/method/svc-stmf start"
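A hedged sketch of the first diagnostic steps when stmf is wedged like this; whether the stored configuration can be repaired or has to be rebuilt is the real question of the thread.

    # see why SMF considers the service broken, and what the start method logged
    svcs -xv stmf
    tail /var/svc/log/system-stmf:default.log

    # once the underlying configuration problem is addressed, clear maintenance
    svcadm clear svc:/system/stmf:default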
2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi, I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving. I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'. During zpool import I am getting a non-zero exit code,
2008 Nov 25
2
Can a zpool cachefile be copied between systems?
Suppose that you have a SAN environment with a lot of LUNs. In the normal course of events this means that 'zpool import' is very slow, because it has to probe all of the LUNs all of the time. In S10U6, the theoretical 'obvious' way to get around this for your SAN filesystems seems to be to use a non-default cachefile (likely one cachefile per virtual
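A sketch of the copy-the-cachefile idea under discussion, assuming two nodes sharing the SAN LUNs and a pool named sanpool (pool, device, and host names are illustrative).

    # on the node that owns the pool: keep its config in a non-default cachefile
    zpool create -o cachefile=/etc/zfs/sanpool.cache sanpool c4t0d0 c4t1d0

    # copy the cachefile to the standby node
    scp /etc/zfs/sanpool.cache standby:/etc/zfs/sanpool.cache

    # on the standby node: import from the cachefile, skipping the full LUN probe
    zpool import -c /etc/zfs/sanpool.cache sanpool

Whether a cachefile written on one host is safe to consume on another (device paths, hostid checks) is precisely what the poster is asking.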
2011 Jan 04
0
zpool import hangs system
Hello, I've been using Nexentastore Community Edition with no issues for a while now; however, last week I was going to rebuild a different system, so I started to copy all the data off it to a raidz2 volume on my CE system. This was going fine until I noticed that the copy had stalled and the entire system was non-responsive. I let it sit for several hours with no
2012 Jun 13
0
ZFS NFS service hanging on Sunday morning problem
> > Shot in the dark here: what are you using for the sharenfs value on the ZFS filesystem? Something like rw=.mydomain.lan?
They are IP blocks or hosts specified as FQDNs, e.g., pptank/home/tcrane sharenfs rw=@192.168.101/24,rw=serverX.xx.rhul.ac.uk:serverY.xx.rhul.ac.uk
> > I've had issues where a ZFS server loses connectivity to the primary DNS server and
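For context, setting and checking a sharenfs value of that form looks roughly like the following sketch; the filesystem name and access list are taken from the quoted message, everything else is illustrative.

    # set host/network based access on the filesystem
    zfs set sharenfs='rw=@192.168.101/24,rw=serverX.xx.rhul.ac.uk:serverY.xx.rhul.ac.uk' pptank/home/tcrane

    # confirm what is actually exported
    zfs get sharenfs pptank/home/tcrane
    share | grep tcrane

Because the access list names hosts by FQDN, the share is only as reliable as the name service answering for them, which is where the thread is heading.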
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For on-going maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
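Running the scrub itself is simple either way; a sketch, with tank as an illustrative pool name. Without ZFS-level redundancy a scrub can detect checksum errors in data blocks but generally cannot repair them (metadata is kept in multiple copies and can often self-heal).

    # verify every block's checksum and report what was found
    zpool scrub tank
    zpool status -v tank

    # optionally, and only if the installed ZFS version supports it, keep two
    # copies of user data so some data errors can be repaired on a single LUN
    # (affects newly written data only)
    zfs set copies=2 tank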
2007 Feb 13
1
Zpool complains about missing devices
Hello, we had a situation at a customer site where one of the zpools complains about missing devices. We do not know which devices are missing. Here are the details: the customer had a zpool created on a hardware RAID (SAN). There is no redundancy in the pool. The pool had 13 LUNs; the customer wanted to increase its size and added 5 more LUNs. During the zpool add process the system panicked with zfs
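A hedged sketch of how one would usually work out which devices the pool expects; the pool name is illustrative, and zpool history is only available if the installed ZFS version supports it.

    # show which vdevs the pool expects and which it cannot find
    zpool status -v custpool

    # replay the pool's command history to recover the LUN names used in the
    # original create and the later add
    zpool history custpool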
2010 Jun 30
1
zfs rpool corrupt?????
Hello, has anyone encountered the following error message, running Solaris 10 u8 in an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
bash-3.00# zpool status -v rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in
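The usual follow-up to that status output, as a sketch: identify the damaged files, restore (or delete) them, then clear and re-verify. The pool name comes from the message; the file list is whatever zpool status -v reports.

    # list the files flagged with permanent errors
    zpool status -v rpool

    # after restoring each listed file from backup (or removing it),
    # clear the error counters and scrub to confirm the pool comes back clean
    zpool clear rpool
    zpool scrub rpool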
2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
Hi, I've been struggling to get a stable ZFS replication using Solaris 10 11/06 (with current patches) and AVS 4.0 for several weeks now. We tried it on VMware first and ended up in kernel panics en masse (yes, we read Jim Dunham's blog articles :-). Now we are trying on the real thing, two X4500 servers. Well, I have no trouble replicating our kernel panics there, too ... but I think I
2008 Nov 16
4
[ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM
I've tried using S10 U6 to reinstall the boot file (instead of U5) over jumpstart, as it's an LDom, and noticed another error.
Boot device: /virtual-devices@100/channel-devices@200/network@0  File and args: -s
Requesting Internet Address for 0:14:4f:f9:84:f3
boot: cannot open kernel/sparcv9/unix
Enter filename [kernel/sparcv9/unix]:
Has anyone seen this error on U6 jumpstart, or is
2011 Jan 27
0
Move zpool to new virtual volume
Hello all, I want to reorganize the virtual disk / storage pool / volume layout on a StorageTek 6140 with two CSM200 expansion units attached (for example, stripe LUNs across trays, which is not the case at the moment). On a data server I have a zpool "pool1" over one of the volumes on the StorageTek. The zfs file systems in the pool are mounted locally and exported via NFS to clients. Now
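One hedged way to move a pool onto a re-laid-out volume without downtime is to mirror onto the new LUN and then drop the old one; a sketch, assuming pool1 currently sits on c3t0d0 and the new volume appears as c3t1d0 (device names are illustrative, and the new volume must be at least as large as the old one).

    # attach the new volume as a mirror of the existing vdev
    zpool attach pool1 c3t0d0 c3t1d0

    # wait for the resilver to complete
    zpool status pool1

    # detach the old volume, leaving the pool on the new layout
    zpool detach pool1 c3t0d0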
2010 Feb 24
0
disks in zpool gone at the same time
Hi, yesterday all the disks in two of my zpools got disconnected. They are not real disks - LUNs from a StorageTek 2530 array. What could that be - a failing LSI card or the mpt driver in 2009.06? After a reboot I had four disks in FAILED state - zpool clear fixed things with resilvering. Here is how it started (/var/adm/messages): Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info] /pci@0,0/pci10de,5d@
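For separating an HBA/driver fault from an array-side problem, the fault management logs are usually the first stop; a sketch of the typical checks (nothing here is specific to this system).

    # summarise any faults FMA has diagnosed, and dump the raw error telemetry
    fmadm faulty
    fmdump -eV | more

    # confirm the pools stayed healthy after the clear/resilver
    zpool status -x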
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran:
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point the vdev in question now has
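For reference, the sequence that normally ends with an ONLINE pool; if the old disk is still listed under the raidz2 vdev after the resilver, detaching it is the usual remedy (pool and device names are taken from the message).

    zpool offline home c0t6d0           # take the failing disk offline
    zpool replace home c0t6d0 c8t1d0    # resilver onto the replacement
    zpool status home                   # wait for the resilver to finish

    # if c0t6d0 still shows up (e.g. as OFFLINE under a 'replacing' entry),
    # detach it so the vdev is left with only the new disk
    zpool detach home c0t6d0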
2007 Sep 19
2
import zpool error when using loop device as vdev
Hey guys, I just did a test using loop devices as vdevs for a zpool. Procedure as follows:
1) mkfile -v 100m disk1; mkfile -v 100m disk2
2) lofiadm -a disk1 /dev/lofi; lofiadm -a disk2 /dev/lofi
3) zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
4) zpool export pool_1and2
5) zpool import pool_1and2
Error info here:
bash-3.00# zpool import pool1_1and2
cannot import
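A sketch of the same experiment with the pool name kept consistent and the lofi directory handed to import explicitly, since zpool import only scans /dev/dsk by default; the file paths are illustrative (lofiadm wants absolute paths).

    mkfile -v 100m /var/tmp/disk1
    mkfile -v 100m /var/tmp/disk2
    lofiadm -a /var/tmp/disk1        # becomes /dev/lofi/1
    lofiadm -a /var/tmp/disk2        # becomes /dev/lofi/2

    zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
    zpool export pool_1and2

    # point the import at the lofi device directory and use the exact pool name
    zpool import -d /dev/lofi pool_1and2

Note that the import in the quoted error output uses pool1_1and2, which does not match the pool_1and2 created in step 3.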
2011 Jul 12
1
Can zpool permanent errors be fixed by scrub?
Hi, we had a server that lost its connection to a fibre-attached disk array where the data LUNs were housed, due to a 3510 power fault. After the connection was restored, a lot of the zpool status output had these permanent errors listed, as per below. I checked the files in question and as far as I could see they were present and OK. I ran a zpool scrub against the other zpools and they came back with no errors, and the list of
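The usual pattern once the underlying fault is fixed, as a sketch (the pool name is illustrative): scrub, review the list, clear, then scrub or check status again to see whether the entries actually go away.

    # re-verify everything now that the LUNs are reachable again
    zpool scrub datapool
    zpool status -v datapool

    # if the listed files check out (or have been restored), clear the error
    # log and confirm it stays empty on the next scrub
    zpool clear datapool
    zpool scrub datapool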