similar to: zpool export takes too long time in build-134

Displaying 20 results from an estimated 40000 matches similar to: "zpool export takes too long time in build-134"

2009 Apr 08
0
zpool history coredump
Pawel, another (though minor, I suppose) bug report: while playing with my poor pool, I tried to interact with it on -current, thus importing it with -f (without upgrading, of course). After reverting to RELENG_7, I found I can no longer access the history:

root@moose:~# /usr/obj/usr/src/cddl/sbin/zpool/zpool history
History for 'm':
2008-10-14.23:04:28 zpool create m raidz ad4h ad6h
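For context, a minimal sketch of the two commands involved, assuming only the pool name 'm' from the report:

  # force-import a pool last used under another kernel/boot environment
  zpool import -f m
  # list the pool's command history (the operation that failed after reverting)
  zpool history m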
2008 Oct 09
0
"zfs set sharenfs" takes a long time to return.
I have an X4500 fileserver (NFS, Samba) running OpenSolaris 2008.05, pkg-upgraded to snv_91, with ~3200 filesystems (and ~27429 datasets, including snapshots). I've been encountering some pretty big slow-downs on this system when running certain zfs commands. The one causing me the most pain at the moment is that setting the "sharenfs" property on a filesystem takes a little under 7
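A hedged sketch of the operation being timed (dataset name hypothetical):

  # (re)sharing the filesystem over NFS is what makes this call slow
  # on systems with thousands of datasets
  zfs set sharenfs=rw tank/home/user1
  zfs get sharenfs tank/home/user1    # verify the property took effect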
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi. T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1. The command 'zpool export f3-2' has been hung for 30 minutes now and is still going. Nothing else is running on the server. I can see one CPU at 100% in SYS, like:

bash-3.00# mpstat 1
[...]
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0   67   220  110   20    0    0    0    0
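A sketch of how the observation above could be reproduced (pool name taken from the report):

  zpool export f3-2 &    # start the export in the background
  mpstat 1               # watch per-CPU usage; the report shows one CPU pegged in SYS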
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2008 Mar 13
3
[Bug 759] New: 'zpool create -o keysource=,' hanged
http://defect.opensolaris.org/bz/show_bug.cgi?id=759
Summary: 'zpool create -o keysource=,' hanged
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: i86pc/i386
OS/Version: Solaris
Status: NEW
Severity: minor
Priority: P3
Component: other
2007 Jul 27
0
cloning disk with zpool
Hello list, I thought it should be easy to do a clone (not in the ZFS sense of the term) of a disk with zpool. This procedure is strongly inspired by http://www.opensolaris.org/jive/thread.jspa?messageID=135038 and http://www.opensolaris.org/os/community/zfs/boot/ But unfortunately it doesn't work, and we have no clue what could be wrong. On c1d0 you have a ZFS root; create a
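The thread's exact steps are cut off above; one common way to clone a disk at the pool level is to attach a mirror and let it resilver. A hedged sketch with hypothetical device names:

  zpool attach rpool c1d0s0 c2d0s0   # mirror the existing disk onto the new one
  zpool status rpool                 # wait until the resilver completes
  zpool detach rpool c1d0s0          # optionally split off the original disk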
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
Greetings, my OpenSolaris 2009.06 installation on a Thinkpad x60 notebook is a little unstable. From the symptoms during installation, it seems there might be an issue with the ahci driver. There is no problem with the OpenSolaris LiveCD system. Some weeks ago, during a copy of about 2 GB from a USB stick to the ZFS filesystem, the system froze and afterwards refused to boot. Now when investigating
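When files may have been lost to corruption, a scrub followed by a verbose status is the usual first diagnostic (pool name hypothetical):

  zpool scrub rpool
  zpool status -v rpool    # -v lists any files with unrecoverable errors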
2008 Feb 25
3
[Bug 631] New: zpool get with no pool name dumps core in zfs-crypto
http://defect.opensolaris.org/bz/show_bug.cgi?id=631
Summary: zpool get with no pool name dumps core in zfs-crypto
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: minor
Priority: P4
Component: other
AssignedTo:
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all, I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has some ZFS filesystems on it and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
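Since the deleted files still existed in an earlier snapshot, recovery would normally go through a clone of the zvol rather than a destructive rollback; a hedged sketch with hypothetical names:

  zfs list -t snapshot -r tank    # find the snapshot holding the files
  zfs clone tank/iscsivol@before-delete tank/iscsivol-recovered
  # then share the clone over iSCSI and copy the files back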
2009 Jul 20
1
zpool import problem / missing label / corrupted data
After a power outage due to a thunderstorm, my 3-disk raidz1 pool has become UNAVAILable. It is a ZFS v13 pool using the 3 whole disks, created on FreeBSD 8-CURRENT x64, and it worked well for over a month. Unfortunately I wasn't able to import the pool with either a FreeBSD LiveCD or the current OpenSolaris LiveCD x86/x64. When I tried to import the pool with FreeBSD, the system just hangs (I
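When an import hangs or reports missing labels, dumping the vdev labels directly is a common first check (device name hypothetical):

  zpool import     # scan attached devices for importable pools
  zdb -l /dev/ad4  # print the four ZFS labels on one vdev; missing or
                   # garbled labels show where the corruption lies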
2010 May 20
2
reconstruct recovery of rpool zpool and zfs file system with bad sectors
Folks, I posted this question on (OpenSolaris - Help) without any replies: http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am re-posting here in the hope someone can help... I have updated the wording a little too (in an attempt to clarify). I currently use OpenSolaris on a Toshiba M10 laptop. One morning the system wouldn't boot OpenSolaris 2009.06 (it was simply
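With bad sectors involved, the usual advice is to image the disk before attempting any repair, then work from the copy; a hedged sketch with hypothetical paths:

  # copy what is readable, padding unreadable sectors with zeros
  dd if=/dev/rdsk/c5t0d0s0 of=/backup/rpool.img bs=512 conv=noerror,sync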
2008 Mar 27
5
[Bug 871] New: 'zpool key -l' core dumped with keysource=hex,prompt and unmatched entered in
http://defect.opensolaris.org/bz/show_bug.cgi?id=871
Summary: 'zpool key -l' core dumped with keysource=hex,prompt and unmatched entered in
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Windows
Status: NEW
Severity: minor
2010 Jul 06
3
Help with Faulted Zpool Call for Help(Cross post)
Hello list, I posted this a few days ago on the opensolaris-discuss@ list; I am posting here because there may be too much noise on the other lists. I have been without this zfs set for a week now. My main concern at this point is whether it is even possible to recover this zpool. How does the metadata work? What tool could I use to rebuild the corrupted parts, or even find out which parts are corrupted? Most but
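For inspecting a pool's metadata without importing it, zdb is the standard tool; a minimal sketch, assuming an exported or un-importable pool named 'tank':

  zdb -e tank       # -e examines a pool that is not currently imported
  zdb -e -bb tank   # walk and summarize all block pointers; errors here
                    # hint at which parts of the metadata are damaged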
2011 Aug 05
0
Kernel panic on zpool import. 200G of data inaccessible! assertion failed: zvol_get_stats(os, nv) == 0
System: snv_151a 64-bit on Intel. Error: panic[cpu0] assertion failed: zvol_get_stats(os, nv) == 0, file: ../../common/fs/zfs/zfs_ioctl.c, line: 1815. Failure first seen on Solaris 10, update 8. History: I recently received two 320G drives and realized from reading this list that it would have been better to have done the install on the small drives, but I didn't have them at the time.
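A workaround sometimes suggested on this list for assertion panics during import is to relax assertions via /etc/system tunables; this disables safety checks, so treat it as a last resort and back up first:

  # additions to /etc/system occasionally used to get past panic-on-import
  set zfs:zfs_recover=1
  set aok=1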
2008 May 22
1
[Bug 2017] New: zpool key -l fails on 'first' load.
http://defect.opensolaris.org/bz/show_bug.cgi?id=2017
Summary: zpool key -l fails on 'first' load.
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Other
Status: NEW
Severity: minor
Priority: P4
Component: other
AssignedTo: darrenm
2007 Jun 16
5
zpool mirror faulted
I have a strange problem with a faulted zpool (two-way mirror):

[root@einstein;0]~# zpool status poolm
  pool: poolm
 state: FAULTED
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        poolm         UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  corrupted data
            c2t0d0s0  ONLINE       0
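Since both the mirror vdev and the pool report problems while the listed disk is ONLINE, a first low-risk step is often to clear the error state and re-check:

  zpool clear poolm     # reset error counters and retry the devices
  zpool status poolm    # see whether the mirror comes back ONLINE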
2010 Mar 02
11
Expand zpool capacity
Hello, experts. I've got a problem: I'm trying to expand my main zpool (rpool), but I don't know how to do that (I'm a 100% newbie in the non-Windows world). I run OpenSolaris under VMware on Windows. I had a pretty small virtual hard disk of only 12 GB. Yesterday I decided to expand my virtual drive to 20 GB. (After several tries to upgrade the OS to the newest dev releases and
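After growing the virtual disk, the pool does not pick up the extra space by itself. A hedged sketch of the usual steps (device name hypothetical; on a root pool the underlying slice must also be grown first with format/fdisk):

  zpool set autoexpand=on rpool    # let the pool grow when its vdevs do
  zpool online -e rpool c0d0s0     # -e expands the vdev to the new capacity
  zpool list rpool                 # confirm the larger SIZE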
2010 Apr 21
2
HELP! zpool corrupted data
Hello, due to a power outage, our file server running FreeBSD 8.0-p2 will no longer come up because of zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0-p2 CD or the latest OpenSolaris snv_143 CD:

FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64

mfsbsd# zpool import
  pool: tank
    id: 1998957762692994918
 state: FAULTED
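On builds with pool-recovery support (the snv_143 CD has it; FreeBSD 8.0 predates it), a rewind import is the usual next step; a hedged sketch:

  zpool import -fFn tank    # dry run: report whether discarding the last few
                            # transactions would make the pool importable
  zpool import -fF tank     # actually perform the rewind import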
2007 Jul 25
3
Any fix for zpool import kernel panic (reboot loop)?
My system (a laptop with ZFS root and boot, snv_64a), on which I was trying OpenSolaris, is now in the zpool-related kernel-panic reboot loop. Booting into failsafe mode or another Solaris installation and attempting 'zpool import -F rootpool' results in a kernel panic and reboot. A search shows this type of kernel panic has been discussed on this forum over the last year.
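A workaround often used for panic-on-import boot loops is to stop the system from auto-importing the pool at startup by setting aside the cache file from failsafe mode; a hedged sketch, with the root filesystem mounted at /a:

  mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.bad
  # reboot; the pool is no longer imported automatically at startup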
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a very large performance drop on our ZFS storage server. We have 2 pools: 'stor', a raidz built out of 7 iSCSI nodes, and 'home', a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level), we upgraded our NAS head from OpenSolaris b57
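To see where the upgrade stopped halfway, compare what the tools report for the pool version against the on-disk uberblock; a hedged sketch using the pool names from the report:

  zpool upgrade                  # lists pools still below the current version
  zpool get version stor home    # per-pool version property
  zdb -u stor                    # dump the active uberblock, including its version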