Displaying 20 results from an estimated 3000 matches similar to: "Panic when ZFS pool goes down?"
2008 Mar 20
7
ZFS panics Solaris while switching a volume to read-only
Hi,
I just found out that ZFS triggers a kernel panic while switching a mounted volume
into read-only mode.
The system is attached to a Symmetrix, and all ZFS I/O goes through PowerPath.
I ran some I/O-intensive stuff on /tank/foo and switched the device into
read-only mode at the same time (symrdf -g bar failover -establish).
ZFS went 'bam' and triggered a panic:
WARNING: /pci@
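Whether a pool panics the box on I/O failure is governed by the pool-level failmode property on builds that have it (it postdates some of the builds discussed here); a minimal sketch, assuming the pool is named tank as above:

# zpool get failmode tank
# zpool set failmode=wait tank

failmode=wait blocks I/O and waits for the device to return; the other accepted values are continue and panic.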
2008 Jan 28
4
Removing a disk from a ZFS Storage Pool
Hi
my understanding is that you cannot remove a disk from a ZFS storage
pool once you have added it... but I also think I saw an email from Jeff
B saying that the ability to depopulate a disk so that it can be
removed is being worked on... or was I dreaming?
What is the status of this?
Thanks
Tim
--
Tim Thomas
Staff Engineer
Storage
Systems Product Group
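A hedged sketch of what removal supported on builds of this vintage: only hot spares and cache devices come out with 'zpool remove', and one side of a mirror comes out with 'zpool detach'; evacuating a data disk from a pool (the feature mentioned above) is the part that was still being worked on. Device names here are illustrative:

# zpool remove tank c1t4d0     (hot spare or cache device)
# zpool detach tank c1t3d0     (one half of a mirror vdev)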
2008 Jan 31
5
Solaris DomU HVM Network issues
Hi,
I'm running Nevada 75a configured as dom0. I added a domU HVM running
Nevada 75a on a Sun X4600 with 8 AMD dual-core CPUs. I can get the domU
installed and configured; however, I cannot get any network
connectivity. I followed the steps according to documents on OpenSolaris.
I also followed some of the threads on the list to verify the
configuration, but I cannot fix the
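One thing worth checking in this situation is the vif line of the HVM guest config; a minimal sketch in classic xm syntax, assuming a Linux-style bridge named xenbr0 (on a Solaris dom0 the attach point is often the physical NIC name instead, e.g. bge0):

vif = [ 'type=ioemu, bridge=xenbr0' ]

HVM guests need the emulated (ioemu) NIC type unless paravirtualised drivers are installed in the guest.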
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
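For reference, a minimal sketch of both sharing modes on an OpenSolaris box with the in-kernel CIFS service, filesystem name hypothetical:

# zfs set sharenfs=on tank/media     (NFS)
# zfs set sharesmb=on tank/media     (CIFS/SMB)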
2015 Feb 03
3
Very slow disk I/O
On 2/2/2015 8:11 PM, Jatin Davey wrote:
> disk 252:1 | 0-0-0 | 9XG7TNQV ST91000640NS CC03 | Online, Spun Up
> disk 252:2 | 0-0-1 | 9XG4M4X3 ST91000640NS CC03 | Online, Spun Up
> disk 252:3 | 0-1-1 | 9XG4LY7J ST91000640NS CC03 | Online, Spun Up
> disk 252:4 | 0-1-0 | 9XG51233 ST91000640NS CC03 | Online, Spun Up
> End of Output **************************
>
> Let me know if I need to
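A common first step for a slow-disk report like this is per-device statistics from iostat (sysstat package on CentOS); a minimal sketch:

# iostat -dx 5

A device pinned near 100 in %util with a high await is the likely bottleneck.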
2017 Nov 03
2
low end file server with h/w RAID - recommendations
John R Pierce wrote:
> On 11/2/2017 9:21 AM, hw wrote:
>> Richard Zimmerman wrote:
>>> hw wrote:
>>>> Next question: you want RAID, how much storage do you need? Will 4 or 8 3.5" drives be enough (DO NOT GET crappy 2.5" drives - they're *much* more expensive than the 3.5" drives, and smaller disk space. For the price of a 1TB 2.5", I can
2012 Sep 19
2
self-encrypting drives
What's the state of support for self-encrypting drives in CentOS 6?
These are becoming increasingly common both on laptops and in
enterprise storage (particularly nearline), with features like
instant erase via key destruction.
--
john r pierce N 37, W 122
santa cruz ca mid-left coast
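For plain ATA drives (not the same thing as an OPAL-managed SED, but the closest widely supported analogue), the ATA Security feature set offers a secure-erase that a self-encrypting drive typically implements by regenerating its internal key; a hedged sketch with hdparm, password and device purely illustrative:

# hdparm --user-master u --security-set-pass Eins /dev/sdX
# hdparm --user-master u --security-erase Eins /dev/sdX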
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List!
I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances, but I don't have any experience with them. Anybody using these in their ZFS systems, and have you had good luck?
Also, if
2009 Oct 09
22
Does ZFS work with SAN-attached devices?
Hi All,
It's been a while since I touched ZFS. Is the below still the case with ZFS and a hardware RAID array? Do we still need to provide two LUNs from the hardware RAID and then ZFS-mirror those two LUNs?
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
Thanks,
Shawn
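The usual pattern is still to give ZFS redundancy of its own, so it can self-heal checksum errors rather than merely detect them; a minimal sketch with two hypothetical SAN LUN device names:

# zpool create tank mirror c2t0d0 c3t0d0

With a single LUN, ZFS can detect corruption but has no second copy to repair from.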
2008 Jan 25
2
Capturing credentials for imap sync
Hi List
All the IMAP sync apps I could find require the username/password
credentials to be known before a sync occurs. I have Dovecot using LDAP,
acting as a nearline backup mail server to MS Exchange. Every hour
imapsync syncs mail between Exchange and Dovecot. This all works fine
because the users' credentials are known, but when new users are added I
would like the process to work
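One approach that avoids knowing each user's password is a Dovecot master user, which may log in as any user (imapsync then authenticates as 'user*masteruser'). A hedged sketch in Dovecot 2.x-style syntax - 1.x nests the passdb inside an auth section, and the file path here is hypothetical:

auth_master_user_separator = *
passdb {
  driver = passwd-file
  master = yes
  args = /etc/dovecot/passwd.masterusers
}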
2007 Sep 06
0
Zfs with storedge 6130
On 9/4/07 4:34 PM, "Richard Elling" <Richard.Elling at Sun.COM> wrote:
> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss at opensolaris.org in the CC for the
> original...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been asked to implement a ZFS-based solution using a StorEdge 6130 and
>> I'm chasing my own
2009 Feb 04
8
Data loss bug - sidelined??
In August last year I posted this bug, a brief summary of which would be that ZFS still accepts writes to a faulted pool, causing data loss, and potentially silent data loss:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6735932
There have been no updates to the bug since September, and nobody seems to be assigned to it. Can somebody let me know what's happening with this
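Until there is a fix, a cheap belt-and-braces check before trusting a pool with writes is the health summary; a minimal sketch:

# zpool status -x
all pools are healthy

Any output other than that one line means at least one pool is degraded or faulted.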
2013 May 04
4
Scrub CPU usage ...
I just subscribed to this list so in case this subject has already been
discussed at length, my apologies. I have been waiting for btrfs
forever. I have been waiting for it to become reasonably stable. In
the wake of escalating problems with my old hardware RAID setup, I
decided now was the time to make the transition. At this point I have
been completely transitioned to btrfs for nearly
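If scrub load is the worry, later btrfs-progs can start the scrub in the idle I/O class; a hedged sketch, mount point hypothetical:

# btrfs scrub start -c 3 /mnt     (-c 3 = idle I/O priority class)
# btrfs scrub status /mnt

This throttles I/O priority only; checksum verification is still CPU-bound, so some CPU use during a scrub is expected.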
2006 Dec 11
6
Can''t destroy corrupted pool
Ok, so I'm planning on wiping my test pool that seems to have problems
with non-spare disks being marked as spares, but I can't destroy it:
# zpool destroy -f zmir
cannot iterate filesystems: I/O error
Anyone know how I can nuke this for good?
Jim
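The usual last resort when 'zpool destroy' itself errors out is to stop ZFS recognising the devices at all by clobbering the vdev labels (two 256 KB labels at the front of each device and two at the back); newer builds grew 'zpool labelclear' for this. A hedged sketch with dd, device name illustrative - this irreversibly destroys the pool:

# dd if=/dev/zero of=/dev/rdsk/c1t2d0s0 bs=512k count=1

Repeat at the tail of the device for the trailing labels, and for every disk that was in the pool.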
2011 May 20
2
scsi3 persistent reservations in cluster storage fencing
I'm interested in the idea of sharing a bunch of SAS JBOD devices
between two CentOS servers in an active-standby HA cluster sort of
arrangement, and found something about using scsi3 persistent
reservations as a fencing method. I'm not finding a lot of specifics
about how this works, or how you configure two initiator systems on a
SAS chain. I don't have any suitable
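The moving parts are the sg_persist tool from sg3_utils: each initiator registers a key with the shared LUN, one of them takes a write-exclusive-registrants-only reservation, and fencing a node means preempting its key so that node's writes start failing. A hedged sketch, device and keys illustrative:

# sg_persist --out --register --param-sark=0x1 /dev/sdb        (register key 0x1)
# sg_persist --out --reserve --param-rk=0x1 --prout-type=5 /dev/sdb
# sg_persist --in --read-keys /dev/sdb                         (list registered keys)

The fence_scsi agent in the Red Hat cluster suite wraps exactly this dance.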
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
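For what it's worth, plain 'zpool status <poolname>' normally does show the layout: the config section prints each top-level vdev (mirror, raidz1, raidz2) with its member disks indented beneath it, so the grouping is the indentation. A sketch of the shape, names hypothetical:

# zpool status tank
        NAME        STATE
        tank        ONLINE
          raidz1    ONLINE
            c0t1d0  ONLINE
            c0t2d0  ONLINE
            c0t3d0  ONLINE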
2017 Nov 03
0
low end file server with h/w RAID - recommendations
On 11/3/2017 1:31 AM, hw wrote:
>> 2.5" SAS drives spinning at 10k and 15k RPM are the performance
>> solution for online storage, like databases and so forth.?? also make
>> more sense for large arrays of SSDs, as they don't even come in
>> 3.5".??? With 2.5" you can pack more disks per U (24-25 2.5" per 2U
>> face, vs 12 3.5" max per
2008 Feb 18
4
ZFS error handling - suggestion
Howdy,
I have at several times had issues with consumer-grade PC hardware and ZFS not getting along. The problem is not the disks but the fact that I don't have ECC and end-to-end checking on the datapath. What is happening is that random memory errors and bit flips are written out to disk, and when read back again ZFS reports them as checksum failures:
pool: myth
state: ONLINE
status: One or more
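With a pool named myth as above, the standard way to flush out and enumerate this kind of damage is a scrub followed by the verbose status, which lists the affected files; a minimal sketch:

# zpool scrub myth
# zpool status -v myth     (after the scrub completes)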
2010 Feb 08
1
Big send/receive hangs on 2009.06
So, I was running my full backup last night, backing up my main data
pool zp1, and it seems to have hung.
Any suggestions for additional data gathering?
-bash-3.2$ zpool status zp1
pool: zp1
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool
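For the hang itself, the usual low-cost data to gather on Solaris is the userland stacks of the send/receive processes plus kernel thread stacks; a hedged sketch:

$ pstack `pgrep -f 'zfs send'`
$ pstack `pgrep -f 'zfs receive'`
# echo "::threadlist -v" | mdb -k

(The on-disk-format note above is unrelated to the hang; 'zpool upgrade zp1' clears it when convenient.)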
2007 Dec 03
31
How to enable 64bit solaris guest on top of solaris dom0
I can enable a 32-bit Solaris guest on top of a Solaris dom0, but I don't
know how to enable a 64-bit Solaris guest on top of a Solaris dom0. What
configuration do I need to modify?
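A hedged sketch of the usual difference for a paravirtualised Solaris guest: point the config at the 64-bit i86xpv kernel instead of the 32-bit one. The paths below are the stock Solaris ones, but treat them as assumptions for your build, and note the hypervisor itself must be 64-bit:

kernel = '/platform/i86xpv/kernel/amd64/unix'
extra = '/platform/i86xpv/kernel/amd64/unix'
ramdisk = '/platform/i86pc/boot_archive'

The 32-bit kernel is the same path without the amd64 component.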