similar to: Storage 7000

Displaying 20 results from an estimated 7000 matches similar to: "Storage 7000"

2008 Nov 10
9
Sun Storage 7000
Just got an email about this today. Fishworks finally unveiled? http://www.sun.com/launch/2008-1110/index.jsp
2008 May 27
6
slog devices don't resilver correctly
This past weekend, my holiday was ruined due to a log device "replacement" gone awry. I posted all about it here: http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html In a nutshell, a resilver of a single log device with itself, forced because one can't remove a log device from a pool once it is defined, caused ZFS to fully resilver but then attach the log
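A hedged sketch of the trap described there, with hypothetical pool and device names; pools of this vintage (before log-device removal arrived in later pool versions) reject the removal outright:

  zpool add tank log c2t0d0          # attach a separate intent log (slog)
  zpool remove tank c2t0d0           # fails on old pools: log vdevs could
                                     # not be removed once added
  zpool replace tank c2t0d0 c2t0d0   # replacing the slog with itself forces
                                     # the resilver described in the post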
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core Contributor grants for ZFS. Since all of the ZFS Core Contributor grants are set to expire on 02-24-2009, we need to renew the members that are still contributing at Core Contributor levels. We should also add some new members at both the Contributor and Core Contributor levels. First, the current list of Core Contributors: Bill
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
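For reference, the hot spare interfaces as they eventually shipped look roughly like the sketch below (hypothetical device names):

  zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0   # create with a spare
  zpool add tank spare c0t3d0                           # add another spare later
  zpool replace tank c0t0d0 c0t2d0                      # manually swap a spare in
  zpool status tank                                     # spares get their own section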
2008 May 20
4
Ways to speed up 'zpool import'?
We're planning to build a ZFS-based Solaris NFS fileserver environment with the backend storage being iSCSI-based, in part because of the possibilities for failover. In exploring things in our test environment, I have noticed that 'zpool import' takes a fairly long time: about 35 to 45 seconds per pool. A pool import time this slow obviously has implications for how fast
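One common workaround (a sketch with hypothetical paths, not the thread's conclusion): 'zpool import' probes every node under /dev/dsk by default, so pointing -d at a directory holding links to only the pool's devices shrinks the scan:

  mkdir /iscsidevs
  ln -s /dev/dsk/c4t0d0s0 /iscsidevs/   # hypothetical iSCSI device node
  zpool import -d /iscsidevs tank       # search only that directory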
2009 Oct 23
7
cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives:

  fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
  TIME                 UUID                                 SUNW-MSG-ID
  Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
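To map the cryptic vdev name in such a report back to a physical device, one approach (a sketch; the device path is hypothetical) is to dump the ZFS labels and match GUIDs:

  fmdump -e -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006   # ereports behind the
                                                         # diagnosis, incl. vdev GUID
  zdb -l /dev/dsk/c1t2d0s0   # print the labels on a suspect disk; the guid
                             # fields should match what fmdump reported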
2008 Mar 20
5
Snapshots silently eating user quota
All, I assume this issue is pretty old given the time ZFS has been around. I have tried searching the list but could not understand the structure of how ZFS actually takes snapshot space into account. I have a user walter for whom I try the following ZFS operations:

  bash-3.00# zfs get quota store/catB/home/walter
  NAME                     PROPERTY  VALUE  SOURCE
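To see where the space is actually charged, a sketch using the dataset from the post (the usedbysnapshots property exists only on later builds):

  zfs get quota,used store/catB/home/walter
  zfs list -t snapshot -r store/catB/home/walter   # USED column per snapshot
  zfs get usedbysnapshots store/catB/home/walter   # later builds only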
2006 Jul 17
28
Big JBOD: what would you do?
ZFS fans, I'm preparing some analyses on RAS for large JBOD systems such as the Sun Fire X4500 (aka Thumper). Since there are zillions of possible permutations, I need to limit the analyses to some common or desirable scenarios. Naturally, I'd like your opinions. I've already got a few scenarios in analysis, and I don't want to spoil the brainstorming, so
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev" device, I did a test which made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (a 3-disk test, with 2 disks in a RAIDZ and a hot spare) to work such that the hot spare would automatically be activated. But I'm finding that ZFS does not behave this way
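Worth noting: spare activation is driven by FMA (the zfs-diagnosis and zfs-retire agents), not by ZFS reacting to each EIO, so the first thing to check is whether a fault was ever diagnosed. A sketch with hypothetical device names:

  zpool create testpool raidz c1d0 c1d1 spare c1d2
  fmadm faulty   # was the disk actually diagnosed as faulted?
  fmdump -e      # raw ereports generated for the failing disk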
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I've got to say I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or this morning, probably because my home dir was on the zpool with the failed disk
2006 May 16
3
ZFS snv_b39 and S10U2
Hello zfs-discuss, Just to be sure - if I create ZFS filesystems on snv_39 and later want to import that pool on S10U2 - can I safely assume it will just work? (I mean that nothing was added to or changed in the on-disk format in the last few snv releases that is not going to be in U2.) I want to put some data on ZFS right now (I have to do it now) and later I want to
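The question reduces to on-disk version numbers, which both builds can report directly; a minimal check:

  zpool upgrade -v   # list every on-disk format version this build supports
  zpool upgrade      # show the version of each imported pool

A pool imports cleanly as long as its version is no newer than what the target build supports.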
2006 Jul 28
20
3510 JBOD ZFS vs 3510 HW RAID
Hi there, Is it fair to compare the two solutions using Solaris 10 U2 and a commercial database (SAP SD scenario)? The cache on the HW RAID helps, and the CPU load is lower... but the solution costs more and you _might_ not need the performance of the HW RAID. Has anybody with access to these units done a benchmark comparing the performance and, with the pricelist in hand, come to a conclusion?
2009 Jun 18
7
7110 questions
Hi all, (down to the wire here on EDU grant pricing :) I'm looking at buying a pair of 7110's in the EDU grant sale. The price is sure right. I'd use them in a mirrored, cold-failover config. I'd primarily be using them to serve a VMware cluster; the current config is two standalone ESX servers with local storage, 450G of SAS RAID10 each. The 7110 price
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi, I just found out that ZFS triggers a kernel panic while switching a mounted volume into read-only mode. The system is attached to a Symmetrix, and all ZFS I/O goes through PowerPath. I ran some I/O-intensive stuff on /tank/foo and switched the device into read-only mode at the same time (symrdf -g bar failover -establish). ZFS went 'bam' and triggered a panic: WARNING: /pci at
2008 Jun 07
2
Apple fixes DTrace
I'm pleased to pass along news that the Mac OS X DTrace port has been updated in 10.5.3 to fix the issue that caused timer-based probes not to fire in the presence of certain untraceable applications. http://blogs.sun.com/ahl/entry/apple_updates_dtrace A big thank you to the folks at Apple for addressing the problem. Adam -- Adam Leventhal, Fishworks
2008 Feb 18
4
ZFS error handling - suggestion
Howdy, I have several times had issues with consumer-grade PC hardware and ZFS not getting along. The problem is not the disks but the fact that I don't have ECC and end-to-end checking on the data path. What is happening is that random memory errors and bit flips are written out to disk, and when read back again ZFS reports them as checksum failures:

  pool: myth
  state: ONLINE
  status: One or more
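In this situation the checksum counters and affected files can be inspected directly; a minimal sketch against the pool named in the post:

  zpool status -v myth   # -v lists files with unrecoverable checksum errors
  zpool scrub myth       # re-verify every block against its checksum
  zpool clear myth       # reset the error counters afterwards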
2007 Dec 18
1
zfs error listings
Hello all, I am looking to get the master list of all the error codes/messages which I could get back from doing bad things in zfs. I am wrapping the zfs command in Python and want to be able to correctly pick up on errors which are returned from certain operations. I did a source code search on opensolaris.org for the text of some of the errors I know about, with no luck. Are these
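Absent a master list, one pragmatic contract for such a wrapper (a sketch, not from the thread) is the exit status plus captured stderr:

  zfs destroy tank/nosuchfs 2>err.txt ; echo "exit=$?"
  cat err.txt   # e.g. cannot open 'tank/nosuchfs': dataset does not exist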
2007 Nov 18
1
exporting a zfs pool from SPARC to x86
Hi Folks, I've got a 4 TB pool on a 3511 array. The pool was created from, and is mounted on, an old 280R. I'd like to export the pool from the 280R and import it to a shiny new X2200. Will this work? Will the X2200 understand the disk labels made by the 280R? Some tests we've done on a smaller, non-production pool suggest not. Suggestions or pointers welcome.
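The ZFS on-disk format itself is endian-adaptive, so the usual snag in a SPARC-to-x86 move is disk labeling rather than byte order; a hedged test sketch:

  zpool export tank   # on the 280R (SPARC)
  zpool import tank   # on the X2200 (x86): whole-disk (EFI-labeled) vdevs
                      # generally import; pools built on SMI/VTOC-labeled
                      # slices may not be recognized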
2009 May 21
2
About ZFS compatibility
I have created a pool on external storage with B114. Then I exported this pool and imported it on another system with B110, but the import fails with the error: cannot import 'tpool': pool is formatted using a newer ZFS version. Did some big change in ZFS in B114 lead to this compatibility issue? Thanks, Zhihui
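A minimal pre-flight check for this situation (the version number below is hypothetical):

  zpool get version tpool   # on the B114 host, before exporting
  zpool upgrade -v          # on the B110 host: highest version it can import
  zpool create -o version=14 tpool c0t0d0   # if the build supports -o version,
                                            # creating the pool at the older
                                            # version avoids this entirely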
2008 Apr 29
4
Finding Pool ID
Folks, How can I find out the zpool ID without using zpool import? zpool list and zpool status do not have an option for it as of Solaris 10 U5. Any back door to grab this property would be helpful. Thank you, Ajay
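One unsupported back door (a sketch; pool and device names are hypothetical): zdb prints the cached pool configuration, including pool_guid, without importing anything:

  zdb -C mypool | grep pool_guid   # from the cached configuration
  zdb -l /dev/dsk/c0t0d0s0         # or read pool_guid straight from a disk label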