similar to: How to unmount when devices write-disabled?

Displaying 20 results from an estimated 50000 matches similar to: "How to unmount when devices write-disabled?"

2007 Jan 10
1
Solaris 10 11/06
Now that Solaris 10 11/06 is available, I wanted to post the complete list of ZFS features and bug fixes that were included in that release. I'm also including the necessary patches for anyone wanting to get all the ZFS features and fixes via patches (NOTE: later patch revisions may already be available): Solaris 10 Update 3 (11/06) Patches sparc Patches * 118833-36 SunOS 5.10:
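For anyone taking the patch route, a minimal sketch of applying and verifying one of the listed patches on Solaris 10 (the download location and staging path are placeholders):

# cd /var/tmp && unzip 118833-36.zip    # placeholder path; fetch the patch zip first
# patchadd /var/tmp/118833-36           # apply the kernel/ZFS patch
# showrev -p | grep 118833              # confirm the installed revision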
2007 Nov 16
0
ZFS children stepping on parent
I was doing some disaster recovery testing with ZFS, where I did a mass backup of a family of ZFS filesystems using snapshots, destroyed them, and then did a mass restore from the backups. The ZFS filesystems I was testing with had only one parent in the ZFS namespace; and the backup and restore went well until it came time to mount the restored ZFS filesystems. Because I had destroyed
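A rough sketch of that kind of mass backup and restore, assuming a hypothetical parent dataset tank/home and a ZFS release that supports replication streams (zfs send -R):

# zfs snapshot -r tank/home@backup                    # recursive snapshot of parent and children
# zfs send -R tank/home@backup > /backup/home.zsend   # one stream carrying the whole tree
# zfs destroy -r tank/home                            # the "disaster"
# zfs receive -d tank < /backup/home.zsend            # recreates tank/home and its children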
2008 Jun 20
1
zfs corruption...
Hi all, It would appear that I have a zpool corruption issue to deal with... the pool is exported, but upon trying to import it, the server panics.  Are there any tools that can be run against a zpool that is in an exported state?  I've got a separate test bed in which I'm trying to recreate the problem, but I keep getting messages to the effect that I need to import the pool first.  Suggestions? thanks Jay
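One read-only way to poke at a pool while it is still exported (so the debugging itself can't trigger the import panic) is zdb's exported-pool mode; the pool name below is a placeholder and the exact flags vary between releases:

# zdb -e -C mypool    # print the pool configuration assembled from the device labels, no import
# zdb -e -bb mypool   # traverse block pointers and report per-object-type statistics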
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment with the backend storage being iSCSI-based, in part because of the possibilities for failover. In exploring things in our test environment, I have noticed that 'zpool import' takes a fairly long time; about 35 to 45 seconds per pool. A pool import time this slow obviously has implications for how fast
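One commonly suggested way to cut down the import-time device scan, sketched here with placeholder device names, is to point zpool import at a directory containing links to only the pool's own devices:

# mkdir /var/run/iscsi-pool-devs
# ln -s /dev/dsk/c4t0d0s0 /var/run/iscsi-pool-devs/   # placeholder device names
# ln -s /dev/dsk/c4t1d0s0 /var/run/iscsi-pool-devs/
# zpool import -d /var/run/iscsi-pool-devs tank       # scan only that directory, not all of /dev/dsk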
2011 Apr 15
0
ocfs2 1.6 2.6.38-2-amd64 kernel panic when unmount
Hello, We have an ocfs2 1.6 filesystem on a drbd dual-master setup: drbd0 -> sda7 (node1), drbd0 -> sda7 (node2), ocfs2 1.6 on kernel 2.6.38-2-amd64. The kernel panics when unmounting: when we unmount drbd0 on both nodes at around the same time using dsh (umount -v /dev/drbd0), the umount process hangs for a while, 30 minutes or so (pts/0 D+ 20:50 0:00 umount /dev/drbd0 -vvvvv). Then one of the nodes kernel panics. Message from
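When an umount wedges in D state like this, a hedged first step (on a kernel with sysrq enabled) is to dump the blocked tasks before the panic hits, so the stuck ocfs2/dlm call path becomes visible:

# echo w > /proc/sysrq-trigger   # log stack traces of all uninterruptible (D-state) tasks
# dmesg | tail -50               # look for the umount and ocfs2/dlm threads in the output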
2010 May 20
0
zfs unmount returns with Invalid Argument
Anyone have any idea about this? I wanted to separate out my VirtualBox VDIs so that I could enable compression on the rest of the parent directory structure, so I created a ZFS filesystem under my user directory. mv .VirtualBox .VirtualBox_orig zfs create /export/home/user/.VirtualBox zfs create /export/home/user/.VirtualBox/VDI zfs set compression=off /export/home/user/.VirtualBox/VDI zfs set
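For reference, zfs create takes a dataset name (pool/path), not a mount path, and a name component normally can't begin with a dot, so a layout like the one described would look roughly like the sketch below (the rpool/export/home/user hierarchy is an assumption about the pool layout):

# zfs create -o mountpoint=/export/home/user/.VirtualBox rpool/export/home/user/vbox
# zfs create rpool/export/home/user/vbox/VDI
# zfs set compression=off rpool/export/home/user/vbox/VDI   # keep the VDIs uncompressed
# zfs set compression=on rpool/export/home/user             # compress the rest of the home dataset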
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all, I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the following command: # zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 It worked fine, but I was slightly confused by the size yield (99 GB vs the 116 GB I had on my other RAID-Z1 pool of same-sized disks). I thought one of the disks might have been to blame, so I tried swapping it out
2009 Feb 11
0
failmode=continue prevents zpool processes from hanging and being unkillable?
> Dear ZFS experts, > somehow one of my zpools got corrupted. Symptom is that I cannot > import it any more. To me it is of lesser interest why that happened. > What is really challenging is the following. > > Any effort to import the zpool hangs and is unkillable. E.g. if I > issue a "zpool import test2-app" the process hangs and cannot be > killed. As this
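For context, failmode is a pool property (wait | continue | panic), and newer zpool versions allow setting properties at import time; whether that actually avoids this particular hang is exactly the open question, so treat this as a sketch:

# zpool import -o failmode=continue test2-app   # set the property while importing
# zpool set failmode=continue test2-app         # or set it once the pool is imported
# zpool get failmode test2-app                  # confirm the current setting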
2011 Jan 26
2
how to unmount an NFS share when the NFS server is unavailable?
Hi All, How do I unmount an NFS share when the NFS server is unavailable? I tried "umount /bck" but it "hangs" indefinitely. "umount -f /bck" tells me the mount is busy and I can't unmount it: root at saturn:[~]$ umount -f /bck umount2: Device or resource busy umount: /bck: device is busy umount2: Device or resource busy umount: /bck: device is busy This
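On Linux (which the umount2 errors above suggest), two standard escape hatches, sketched here, are killing whatever still holds the mount and then doing a lazy unmount:

# fuser -km /bck   # kill processes with files open under /bck (optional, disruptive)
# umount -l /bck   # lazy unmount: detach now, clean up once the last reference goes away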
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris-10 u5/08 on a SunFire T5220; this is our first rollout of ZFS and zpools. We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0). Created zpool my_pool as RAID-Z using 5 disks + 1 spare: c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0. I am working on alerting & recovery plans for disk failures in the zpool. As a test, I have pulled disk
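ZFS typically only notices a pulled disk when it next tries to do I/O to it, so one hedged way to make this test trip the alerting is to force I/O across every device and then recheck:

# zpool scrub my_pool       # touches every device in the pool
# zpool status -v my_pool   # the pulled disk should now show as UNAVAIL or FAULTED
# fmadm faulty              # Solaris FMA's view of any diagnosed fault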
2009 Feb 12
4
Two zvol devices one volume?
Hi, Can anyone explain the following to me? Two zvol devices point at the same data. I was installing osol 2008.11 in xVM when I saw that there was already a partition on the installation disk. An old dataset that I deleted (since I gave it a slightly different name than I intended) is not removed under /dev. I should not have used that name, but two device links should perhaps not
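A hedged way to spot a stale zvol device link on Solaris is to compare what ZFS actually knows about with what sits under /dev/zvol (rpool is a placeholder pool name):

# zfs list -t volume -r rpool   # the volumes that actually exist
# ls -l /dev/zvol/dsk/rpool/    # device links; one with no matching volume is stale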
2005 Nov 28
1
Administration Guide bug?
Chapter 9, "Troubleshooting and Data Recovery", p.115 states: "To enable background scrubbing, use the zpool set command: # zpool set scrub=2w The parameter is a time duration indicating how often a complete scrub should be performed. In this case, the administrator is requesting that the pool be scrubbed once every two weeks. ZFS automatically tries to schedule I/O to even
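For what it's worth, no such scrub pool property exists in shipping ZFS; scrubs are started explicitly and usually scheduled externally, for example (a sketch, with a placeholder pool name and schedule):

# zpool scrub tank                        # start a scrub now
# crontab -e                              # then add something like:
0 2 1,15 * * /usr/sbin/zpool scrub tank   # scrub on the 1st and 15th at 02:00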
2015 Dec 30
2
Centos 7 guest - long delay on mounting /boot with host disk write cache off
Hello, I've noticed a strange delay while booting a CentOS 7 guest on a CentOS 7 host with slow disks (7200RPM) with write cache off. The guest and host are freshly installed Centos 7 (host was fully patched before guest install). Guest is installed on an lvm pool residing on an md raid1 with two SATA 7200 RPM drives with their write caches off. The delay is on mounting /boot, the dmesg
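To confirm or toggle the drive write-cache setting on the host while comparing boot-time behaviour, something like the following works for SATA drives (device name is a placeholder):

# hdparm -W /dev/sda    # query the current write-cache state
# hdparm -W1 /dev/sda   # enable the write cache for comparison
# hdparm -W0 /dev/sda   # disable it again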
2010 Oct 04
3
hot spare remains in use
Hi, I had a hot spare used to replace a failed drive, but then the drive appears to be fine anyway. After clearing the error it shows that the drive was resilvered, but keeps the spare in use.

zpool status pool2
  pool: pool2
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool2       ONLINE       0     0     0
          raidz2
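The usual way to put a spare back on the shelf once the original drive turns out to be healthy is to detach the spare; the device name below is a placeholder for whatever shows under the pool's spares:

# zpool detach pool2 c2t5d0   # detach the in-use spare
# zpool status pool2          # the spare should drop back to AVAIL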
2010 May 20
2
reconstruct recovery of rpool zpool and zfs file system with bad sectors
Folks I posted this question on (OpenSolaris - Help) without any replies http://opensolaris.org/jive/thread.jspa?threadID=129436&tstart=0 and am re-posting here in the hope someone can help ... I have updated the wording a little too (in an attempt to clarify) I currently use OpenSolaris on a Toshiba M10 laptop. One morning the system wouldn't boot OpenSolaris 2009.06 (it was simply
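A common recovery pattern for a root pool with bad sectors is to import it under an alternate root from the live CD and let a scrub enumerate the damage; a sketch, assuming the pool is still importable at all:

# zpool import -f -R /a rpool   # import under /a from the live media environment
# zpool scrub rpool             # read everything; bad sectors surface as checksum/I/O errors
# zpool status -v rpool         # lists the files affected by permanent errors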
2007 Jun 15
3
zfs and EMC
Hi there, I see strange behavior when I create a zfs pool on an EMC PowerPath pseudo device. I can create a pool on emcpower0a but not on emcpower2a; zpool core dumps with "invalid argument" .... ???? That's my second machine with PowerPath and zfs; the first one works fine, even zfs/PowerPath and failover ... Is there anybody who has the same failure and a solution? :) Greets Dominik
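One thing worth ruling out, sketched with hypothetical names, is the slice behind the pseudo device: the trailing "a" maps to slice 0, and if that slice is zero-length on emcpower2 the create can fail where emcpower0a (with a real slice 0) succeeds:

# prtvtoc /dev/rdsk/emcpower2a       # check the VTOC: does slice 0 actually have a size?
# zpool create testpool emcpower2c   # hypothetical: try the whole-disk slice ("c" = slice 2)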
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo:
2008 Jun 04
0
CTDB problems: 1) Unable to get tcp info for CTDB_CONTROL_TCP_CLIENT, 2) ctdb disable doesn't failover
greetings, trying to follow tridge's failover process at http://samba.org/~tridge/ctdb_movies/node_disable.html I encounter this error.

oss02:~ # smbstatus -np
Processing section "[homes]"
Processing section "[profiles]"
Processing section "[users]"
Processing section "[groups]"
Processing section "[local]"
Processing section
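For reference, the basic CTDB-side checks during a node_disable test look roughly like this (run on any cluster node; exact output varies by version):

# ctdb status    # node list and health; the disabled node should show DISABLED
# ctdb ip        # which node currently hosts each public address
# ctdb disable   # mark this node unhealthy so its public IPs move elsewhere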
2008 Dec 18
3
automatic forced zpool import with unmatched hostid
Hi, since the hostid is stored in the label, "zpool import" fails if the hostid doesn't match. Under certain circumstances (LDom failover) that means you have to manually force the zpool import while booting. With more than 80 LDoms on a single host it would be great if we could configure the machine back to the old behavior where it didn't fail, maybe with an /etc/system
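The manual override is the force flag, which ignores the stored hostid; whether that can be made automatic at boot, as the poster asks, is a separate question (pool name is a placeholder):

# zpool import -f mypool   # -f overrides the "pool was last accessed by another system" check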
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our zfs storage server. We have 2 pools: pool 1, stor, is a raidz out of 7 iscsi nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (on the zfs level) we upgraded our NAS head from opensolaris b57
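Some useful first checks before digging further (pool name from the post; zdb output format varies by build):

# zpool upgrade     # lists pools whose on-disk version is older than the software supports
# zpool upgrade stor   # upgrade the stor pool's on-disk format (one-way)
# zdb -u stor       # dump the active uberblock, including its version field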