similar to: Zpooling problems

Displaying 20 results from an estimated 50000 matches similar to: "Zpooling problems"

2007 Jan 28
2
bug id 6381203
Hello, what is the status of the bug 6381203 fix in S10 u3? ("deadlock due to i/o while assigning (tc_lock held)") Was it integrated? Is there a patch? Thanks, -- leon This message posted from opensolaris.org
2008 Feb 15
2
[storage-discuss] Preventing zpool imports on boot
On Thu, Feb 14, 2008 at 11:17 PM, Dave <dave-opensolaris at dubkat.com> wrote: > I don't want Solaris to import any pools at bootup, even when there were > pools imported at shutdown/at crash time. The process to prevent > importing pools should be automatic and not require any human > intervention. I want to *always* import the pools manually. > > Hrm... what
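One approach often suggested for this (a sketch only, assuming the build has the cachefile pool property; the pool name "tank" is hypothetical): keep the pool out of /etc/zfs/zpool.cache so nothing auto-imports it at boot, then import it by hand when wanted.

    # keep "tank" out of /etc/zfs/zpool.cache so boot does not auto-import it
    zpool set cachefile=none tank

    # after a reboot, import it manually when needed
    zpool import -d /dev/dsk tank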
2010 Jan 21
1
Zpool is a bit Pessimistic at failures
Hello, Anyone else noticed that zpool is kind of negative when reporting back from some error conditions? Like: cannot import 'zpool01': I/O error Destroy and re-create the pool from a backup source. or even worse: cannot import 'rpool': pool already exists Destroy and re-create the pool from a backup source. The first one i
2007 Jul 26
4
Does iSCSI target support SCSI-3 PGR reservation?
Does opensolaris iSCSI target support SCSI-3 PGR reservation? My goal is to use the iSCSI LUN created by [1] or [2] as a quorum device for a 3-node suncluster. [1] zfs set shareiscsi=on <storage-pool/zfs volume name> [2] iscsitadm create target ..... Thanks, -- leon This message posted from opensolaris.org
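For reference, a minimal sketch of the two paths the poster labels [1] and [2], assuming a pool named "tank" and a small zvol used as the quorum LUN (both names hypothetical):

    # [1] create a small zvol and export it through the shareiscsi property
    zfs create -V 512m tank/quorum
    zfs set shareiscsi=on tank/quorum

    # [2] or define the target explicitly with iscsitadm, backed by the zvol
    iscsitadm create target -b /dev/zvol/rdsk/tank/quorum quorum-target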
2007 Sep 18
1
zfs-discuss Digest, Vol 23, Issue 34
Hello, I am a final-year computer engineering student and I am planning to implement ZFS on Linux. I have gone through the articles posted on Solaris. Please let me know about the feasibility of implementing ZFS on Linux. Waiting for valuable replies. Thanks in advance. On 9/14/07, zfs-discuss-request at opensolaris.org <zfs-discuss-request at opensolaris.org> wrote: > Send
2007 Jul 25
3
Any fix for zpool import kernel panic (reboot loop)?
My system (a laptop with ZFS root and boot, SNV 64A) on which I was trying Opensolaris now has the zpool-related kernel panic reboot loop. Booting into failsafe mode or another solaris installation and attempting: 'zpool import -F rootpool' results in a kernel panic and reboot. A search shows this type of kernel panic has been discussed on this forum over the last year.
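A common diagnostic step in this situation (a sketch only; "rootpool" is the pool name from the post, the mount point is hypothetical) is to force-import the pool from failsafe mode or a second install under an alternate root, so it is not mounted over the running system:

    # force-import the damaged pool under an alternate root for inspection
    zpool import -f -R /mnt rootpool
    zpool status -v rootpool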
2010 Jan 22
0
Removing large holey file does not free space 6792701 (still)
Hello, I mentioned this problem a year ago here and filed 6792701, and I know it has been discussed since. It should have been fixed in snv_118, but I can still trigger the same problem. This is only triggered if the creation of a large file is aborted, for example by loss of power, crash or SIGINT to mkfile(1M). The bug should probably be reopened but I post it here since some people were
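A minimal reproduction along the lines the poster describes might look like this (pool and file names hypothetical; the point is that the sparse-file creation is interrupted before it completes):

    zpool list tank                  # note the free space
    mkfile 100g /tank/bigfile &      # start creating a large file ...
    kill -INT %1                     # ... and abort it with SIGINT part-way
    rm /tank/bigfile
    zpool list tank                  # free space should eventually come back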
2007 Apr 23
5
Re: [nfs-discuss] Multi-tera, small-file filesystems
On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote: > Hello, > > I'd like to plan a storage solution for a system currently in > production. > > The system's storage is based on code which writes many files to > the file system, with overall storage needs currently around 40TB > and expected to reach hundreds of TBs. The average file size of the >
2008 Jan 02
1
Adding to zpool: would failure of one device destroy all data?
I didn't find any clear answer in the documentation, so here it goes: I've got a 4-device RAIDZ array in a pool. I then add another RAIDZ array to the pool. If one of the arrays fails, would all the data on the array be lost, or would it be like disc spanning, and only the data on the failed array be lost? Thanks in advance. This message posted from opensolaris.org
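For context: ZFS stripes data across top-level vdevs, so losing an entire raidz vdev (more disks than its parity can cover) takes the whole pool with it; it is not like disk spanning where only one array's data is lost. A sketch of the operation being discussed, with hypothetical device names:

    # add a second raidz vdev; new writes are striped across both vdevs
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
    zpool status tank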
2008 Feb 25
3
[Bug 631] New: zpool get with no pool name dumps core in zfs-crypto
http://defect.opensolaris.org/bz/show_bug.cgi?id=631 Summary: zpool get with no pool name dumps core in zfs-crypto Classification: Development Product: zfs-crypto Version: unspecified Platform: Other OS/Version: Solaris Status: NEW Severity: minor Priority: P4 Component: other AssignedTo:
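The snippet does not show the exact reproduction, but on a stock system the closest invocations would be 'zpool get' with a property operand and no pool name, which normally just reports on every imported pool rather than dumping core (a sketch, not taken from the bug report):

    zpool get all        # all properties, no pool name given
    zpool get version    # single property, no pool name given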
2008 Mar 13
3
[Bug 759] New: 'zpool create -o keysource=,' hanged
http://defect.opensolaris.org/bz/show_bug.cgi?id=759 Summary: 'zpool create -o keysource=,' hanged Classification: Development Product: zfs-crypto Version: unspecified Platform: i86pc/i386 OS/Version: Solaris Status: NEW Severity: minor Priority: P3 Component: other
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334 Summary: zpool destroy panics after zfs_force_umount_stress Classification: Development Product: zfs-crypto Version: unspecified Platform: Other OS/Version: Solaris Status: NEW Severity: major Priority: P2 Component: other AssignedTo:
2010 Jun 16
0
files lost in the zpool - retrieval possible?
Greetings, my Opensolaris 06/2009 installation on a Thinkpad x60 notebook is a little unstable. From the symptoms during installation it seems there might be an issue with the ahci driver. There was no problem with the Opensolaris LiveCD system. Some weeks ago, during a copy of about 2 GB from a USB stick to the zfs filesystem, the system froze and afterwards refused to boot. Now when investigating
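Typical first steps when a pool survived a freeze but files appear to be missing (a sketch; "rpool" here is only a placeholder for whatever pool holds the data):

    zpool status -v rpool     # see what ZFS reports, including files with known errors
    zpool scrub rpool         # scrub to surface latent checksum errors
    zpool status -v rpool     # re-check once the scrub completes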
2006 Jun 28
2
ZFS root install
I have posted a blog http://solaristhings.blogspot.com/ on how I have configured a zfs root partition on my laptop. It is a slightly modified version of Tabriz's blog http://blogs.sun.com/roller/page/tabriz?entry=are_you_ready_to_rumble The main difference is that I only require a very small ufs partition for grub, and I detail how to use a zfs clone as a test root partition. Doug
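The clone-as-test-root idea from the post, sketched with hypothetical dataset names:

    # snapshot the live root and clone it as a disposable test root
    zfs snapshot rootpool/root@baseline
    zfs clone rootpool/root@baseline rootpool/testroot
    # then point the boot entry at the clone, per the method in the blog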
2008 May 22
1
[Bug 2017] New: zpool key -l fails on "first" load.
http://defect.opensolaris.org/bz/show_bug.cgi?id=2017 Summary: zpool key -l fails on "first" load. Classification: Development Product: zfs-crypto Version: unspecified Platform: Other OS/Version: Other Status: NEW Severity: minor Priority: P4 Component: other AssignedTo: darrenm
2007 Feb 03
4
Which label does a ZFS/ZPOOL device have? VTOC or EFI?
Hi All, ZPOOL/ZFS commands write an EFI label on a device if we create a ZPOOL/ZFS fs on it. Is that true? I formatted a device with a VTOC label and created a ZFS file system on it. Now which label does the ZFS device have? Is it the old VTOC or EFI? After creating the ZFS file system on a VTOC-labeled disk, I am seeing the following warning messages. Feb 3 07:47:00 scoobyb
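The general rule (which format(1M)'s verify command can confirm) is that zpool relabels a whole disk with an EFI label, but leaves the existing VTOC (SMI) label alone when it is handed a slice. A sketch with hypothetical device names:

    # whole-disk vdev: ZFS writes an EFI label on the disk
    zpool create tank c1t0d0

    # slice vdev: the existing VTOC label on the disk is kept
    zpool create tank2 c1t1d0s0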
2011 Jul 26
2
recover zpool with a new installation
Hi all, I lost my storage because rpool doesn't boot. I tried to recover, but opensolaris says to "destroy and re-create". My rpool is installed on a flash drive, and my pool (with my data) is on other disks. My question is: is it possible to reinstall opensolaris on a new flash drive, without touching my pool of disks, and then recover that pool? Thanks. Regards, --
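Reinstalling onto a fresh flash drive does not touch the data pool on the other disks; after the new install the pool can be located and imported again. A sketch, with "mypool" standing in for the poster's data pool:

    zpool import            # list pools the new system can find but has not imported
    zpool import -f mypool  # import it; -f if it still looks in use by the old install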
2008 Mar 10
2
[Bug 701] New: 'zpool create -o keysource=' fails on sparc - invalid argument
http://defect.opensolaris.org/bz/show_bug.cgi?id=701 Summary: 'zpool create -o keysource=' fails on sparc - invalid argument Classification: Development Product: zfs-crypto Version: unspecified Platform: SPARC/sun4u OS/Version: Solaris Status: NEW Severity: minor Priority:
2010 Jan 17
2
Root Mirror - Permission Denied
I have a system that I'm trying to bring up with a mirrored rpool. I'm using DarkStar's ZFS Root Mirror blog post as a guide (http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html). When I get to step 3 I execute: pfexec prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2 I get: fmthard: Cannot open device /dev/rdsk/c7d1s2 - Permission denied Any
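The usual explanation for this error is that pfexec only elevates prtvtoc, the left-hand side of the pipeline, so fmthard still runs unprivileged and cannot open the raw device. One commonly suggested fix is to run the whole pipeline under pfexec (a sketch using the poster's device names):

    pfexec sh -c 'prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2'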
2010 May 24
0
zpool export takes too long time in build-134
Hi, I did zpool import/export performance testing on opensolaris build-134:
1) Create 100 zfs and 100 snapshots, then do zpool export/import: export takes about 5 seconds, import takes about 5 seconds.
2) Create 200 zfs and 200 snapshots, then do zpool export/import: export takes about 80 seconds, import takes about 12 seconds.
3) Create 300 zfs and 300 snapshots, then do
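A sketch of how such a test might be scripted (pool and dataset names are hypothetical, and the filesystem count is a parameter):

    #!/bin/sh
    # create N filesystems plus one snapshot each, then time export/import
    POOL=testpool
    N=100
    i=1
    while [ $i -le $N ]; do
        zfs create $POOL/fs$i
        zfs snapshot $POOL/fs$i@snap
        i=`expr $i + 1`
    done
    time zpool export $POOL
    time zpool import $POOL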