similar to: Do multiple pools compete for memory?

Displaying 20 results from an estimated 10000 matches similar to: "Do multiple pools compete for memory?"

2009 Aug 27
0
How are you supposed to remove faulted spares from pools?
We have a situation where all of the spares in a set of pools have gone into a faulted state and now, apparently, we can't remove them or otherwise de-fault them. I'm confident that the underlying disks are fine, but ZFS seems quite unwilling to do anything about the spares situation. (The specific faulted state is 'FAULTED corrupted data' in 'zpool
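A possible approach, sketched with a placeholder pool name "tank" and spare device "c4t0d0" (neither comes from the post above): clear the error state and then remove the spare from the pool:
# zpool clear tank c4t0d0
# zpool remove tank c4t0d0
zpool remove is the documented way to take hot spares out of a pool; whether it succeeds on a spare flagged as corrupted may depend on the release.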
2008 Mar 12
5
[Bug 752] New: zfs set keysource no longer works on existing pools
http://defect.opensolaris.org/bz/show_bug.cgi?id=752 Summary: zfs set keysource no longer works on existing pools Classification: Development Product: zfs-crypto Version: unspecified Platform: Other OS/Version: Solaris Status: NEW Severity: blocker Priority: P1 Component: other AssignedTo:
2009 Nov 02
2
How do I protect my zfs pools?
Hi, I may have lost my first zpool, due to ... well, we're not yet sure. The 'zpool import tank' causes a panic -- one which I'm not even able to capture via savecore. I'm glad this happened when it did. At home I am in the process of moving all my data from a Linux NFS server to OpenSolaris. It's something I'd been meaning to do
2006 Jul 18
1
file access algorithm within pools
Hello, What is the access algorithm used within multi-component pools for a given pool, and does it change when one or more members of the pool become degraded? Examples: zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror c5t0d0 c6t0d0 or: zpool create ztank raidz c1t0d0 c2t0d0 c3t0d0 raidz c4t0d0 c5t0d0 c6t0d0 As files are created on the filesystem within these pools,
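For reference: as far as I know, ZFS stripes new writes dynamically across all top-level vdevs (the three mirrors, or the two raidz groups, in the examples above), favouring vdevs with more free space, and a degraded member does not change which vdev receives the data. The per-vdev distribution can be observed while the pool is busy, e.g.:
# zpool iostat -v mtank 5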
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello, Sorry for the (very) long subject but I've pinpointed the problem to this exact situation. I know about the other threads related to hangs, but in my case there was no < zfs destroy > involved, nor any compression or deduplication. To make a long story short, when - a disk contains 2 partitions (p1=32GB, p2=1800 GB) and - p1 is used as part of a zfs mirror of rpool
2007 Jul 12
3
How to list pools that are not imported
Hi all, again this might be a FAQ, but imagine that I have a pool on a USB stick, I insert the stick, and how can I figure out what pools are available for 'zpool import' without knowing their names? zpool list does not seem to be listing those. Thanks, Martin
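Running zpool import with no arguments scans the default device directory and lists any exported pools it finds, with their names and numeric IDs, without actually importing anything; a specific directory can be searched with -d:
# zpool import
# zpool import -d /dev/dsk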
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
Greetings, my OpenSolaris 06/2009 installation on a Thinkpad x60 notebook is a little unstable. From the symptoms during installation it seems there might be an issue with the ahci driver. No problem with the OpenSolaris LiveCD system. Some weeks ago, during a copy of about 2 GB from a USB stick to the zfs filesystem, the system froze and afterwards refused to boot. Now when investigating
2008 Nov 17
3
slice overlap error when creating pools
Hi, As I was experimenting with snv101, I discovered that every attempt to create a zpool gives this error: #zpool create pool1 c1d1s0 invalid vdev specification use '-f' to override the following errors: /dev/dsk/c1d1s0 overlaps with /dev/dsk/c1d1s2 I am testing with vmware, so I used the same virtual disk in a snv77 install, and I don't get any error.
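As the message itself suggests, the check can be overridden with -f. On Solaris, slice 2 conventionally covers the whole disk, so an overlap between s0 and s2 is usually expected rather than a sign of a real conflict; assuming that is the case here:
# zpool create -f pool1 c1d1s0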
2009 Apr 08
2
ZFS data loss
Hi, I have lost a ZFS volume and I am hoping to get some help to recover the information (a couple of months' worth of work :( ). I have been using ZFS for more than 6 months on this project. Yesterday I ran a "zvol status" command, the system froze and rebooted. When it came back the discs were not available. See below the output of "zpool status", "format"
2009 Jan 23
2
zpool import fails to find pool
Hi all, I moved from Sol 10 Update 4 to Update 6. Before doing this I exported both of my zpools and replaced the discs containing the ufs root with two new discs (these discs did not have any zpool/zfs info and are raid mirrored in hardware). Once I had installed Update 6 I did a zpool import, but it only shows (and was able to) import one of the two pools. Looking at dmesg it appears as
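A few things worth trying in this situation (the device path and pool id below are generic, not taken from the post): re-scan the device directory explicitly, check for pools recorded as destroyed, or import by the numeric ID that zpool import prints instead of by name:
# zpool import -d /dev/dsk
# zpool import -D
# zpool import <pool-id>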
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it was a hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated. ------------------- EMAIL ------------------- List of faulty resources:
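On builds that support them, the import recovery options are usually the first things to try for a panic-on-import loop; this is only a sketch, using the pool name "tank" from the post:
# zpool import -o readonly=on tank
# zpool import -F tank
-F asks ZFS to discard the last few transactions to get back to an importable state, so it can lose the most recent writes; a read-only import first, where available, is the safer check.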
2009 Dec 19
2
Zfs upgrade freezes desktop
On snv_129, a zfs upgrade (*not* a zpool upgrade) from version 3 to version 4 caused the desktop to freeze - no response to keyboard or mouse events and clock not updated. ermine% uname -a SunOS ermine 5.11 snv_129 i86pc i386 i86pc ermine% zpool upgrade This system is currently running ZFS pool version 22. The following pools are out of date, and can be upgraded. After being upgraded, these
2009 Oct 27
2
root pool can not have multiple vdevs ?
This seems like a bit of a restriction ... is this intended? # cat /etc/release Solaris Express Community Edition snv_125 SPARC Copyright 2009 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms. Assembled 05 October 2009 # uname -a SunOS neptune 5.11 snv_125 sun4u sparc SUNW,Sun-Fire-880 #
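Yes, this is an intentional restriction: a root pool must consist of a single top-level vdev (a single disk or a mirror), largely because of boot-loader constraints. Redundancy is added by attaching a mirror rather than by adding a second vdev, e.g. (placeholder device names):
# zpool attach rpool c0t0d0s0 c0t1d0s0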
2008 May 18
2
possible zfs bug? lost all pools
after trying to mount my zfs pools in single user mode I got the following message for each: May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as it was last accessed by another system (host: gw.bb1.matik.com.br hostid: 0xbefb4a0f). See: http://www.sun.com/msg/ZFS-8000-EY Any zpool command returned nothing other than that the zfs did not exist; it seems the zfs info on disks
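The ZFS-8000-EY message means the pool's label records a different hostid than the machine doing the import, which typically happens after a reinstall or after moving disks. If the pool is known not to be in use by another host, it can be imported with the force flag:
# zpool import -f cache1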
2005 Jul 08
1
Re: Hot swap CPU -- shared memory (1 NUMA/UPA) v. clustered (4 MCH)
From: Bruno Delbono <bruno.s.delbono at mail.ac> > I'm really sorry to start this thread again but I found something very > interesting I thought everyone should ^at least^ have a look at: > http://uadmin.blogspot.com/2005/06/4-dual-xeon-vs-e4500.html > This article takes into account a comparison of 4 dual Xeon vs. e4500. > The author (not me!) talks about "A
2007 Sep 18
1
zfs-discuss Digest, Vol 23, Issue 34
Hello, I am a final-year computer engineering student and I am planning to implement ZFS on Linux. I have gone through the articles posted on Solaris. Please let me know about the feasibility of implementing ZFS on Linux. Waiting for valuable replies. Thanks in advance. On 9/14/07, zfs-discuss-request at opensolaris.org <zfs-discuss-request at opensolaris.org> wrote: > Send
2009 Nov 02
0
Kernel panic on zfs import (hardware failure)
Hey, On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin <Victor.Latushkin at sun.com> wrote: > Donald Murray, P.Eng. wrote: >> >> Hi, >> >> I've got an OpenSolaris 2009.06 box that will reliably panic whenever >> I try to import one of my pools. What's the best practice for >> recovering (before I resort to nuking the pool and
2008 Feb 15
2
[storage-discuss] Preventing zpool imports on boot
On Thu, Feb 14, 2008 at 11:17 PM, Dave <dave-opensolaris at dubkat.com> wrote: > I don't want Solaris to import any pools at bootup, even when there were > pools imported at shutdown/at crash time. The process to prevent > importing pools should be automatic and not require any human > intervention. I want to *always* import the pools manually. > > Hrm... what
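Two mechanisms commonly pointed to for this: pools are re-imported at boot from /etc/zfs/zpool.cache, so exporting the pools before shutdown (or removing that cache file) prevents the automatic import; and on releases that have the cachefile pool property, a pool can be kept out of the default cache entirely (pool name below is a placeholder):
# zpool set cachefile=none tank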
2008 Apr 29
4
Finding Pool ID
Folks, How can I find out the zpool id without using zpool import? zpool list and zpool status do not have an option for this as of Solaris 10U5. Any back door to grab this property would be helpful. Thank you, Ajay
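For an exported pool, zpool import with no arguments prints the numeric pool id alongside the name. For an already-imported pool, one back door is zdb, which run with no arguments dumps the cached configuration including pool_guid; later releases also expose a guid pool property (zpool get guid <pool>), though that may not exist on 10U5:
# zpool import
# zdb | grep pool_guid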
2006 Jun 22
1
zfs snapshot restarts scrubbing?
Hi, yesterday I implemented a simple hourly snapshot on my filesystems. I also regularly initiate a manual "zpool scrub" on all my pools. Usually the scrubbing will run for about 3 hours. But after enabling hourly snapshots I noticed that the scrub is always restarted if a new snapshot is created - so basically it will never have the chance to finish: # zpool scrub scratch # zpool
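For what it's worth, this was a known limitation of older ZFS releases: taking a snapshot restarted an in-progress scrub or resilver, and it was addressed in later builds. Until then, the practical workarounds are to pause the snapshot schedule while scrubbing, or to stop a scrub that will never finish anyway:
# zpool scrub -s scratch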