search for: murnane

Displaying 17 results from an estimated 17 matches for "murnane".

2009 Aug 04
7
Sol10u7: can't "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool with a mirrored pair and a (shared) hot spare. We reconfigured disks a while ago and now the controller is c4 instead of c2. The hot spare was originally on c2, and apparently on rebooting it didn't get found. So, I looked up what the new name for the hot spare was, then added it to the pool with "zpool
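For reference, the normal workflow for a hot spare whose controller path has changed is to drop it under the old name and re-add it under the new one; a minimal sketch with a hypothetical pool name "tank" and hypothetical device names (the thread is about the case where the remove step fails):

  # zpool remove tank c2t8d0      (drop the stale spare entry; zpool remove handles hot spares and cache devices)
  # zpool add tank spare c4t8d0   (re-add the same disk under its new controller path)
  # zpool status tank             (the spare should show up again as AVAIL)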
2013 Mar 20
11
System started crashing hard after zpool reconfigure and OI upgrade
I have two identical Supermicro boxes with 32 GB RAM. Hardware details at the end of the message. They were running OI 151.a.5 for months. The zpool configuration was one storage zpool with 3 vdevs of 8 disks in RAIDZ2. The OI installation is absolutely clean. Just next-next-next until done. All I do is configure the network after install. I don't install or enable any other services.
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now. When we discovered the problem with full filesystems not allowing deletes over NFS, we became very anxious to fix this; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse to fix this (e.g., rm). I played around with this on my OpenSolaris box at home, read around
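For context, refquota (as opposed to quota) limits only the space referenced by the live filesystem and excludes snapshot overhead, which is why it is usually suggested as a way to leave users room to delete files. A hedged sketch of that setup with a hypothetical dataset name; the thread reports that EDQUOT on unlink() over NFS still occurs even with it:

  # zfs set refquota=10G tank/home/alice
  # zfs get refquota,quota,used,referenced tank/home/alice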
2009 Apr 11
17
Supermicro SAS/SATA controllers?
The standard controller that has been recommended in the past is the AOC-SAT2-MV8 - an 8-port card with a Marvell chipset. There have been several mentions of LSI-based controllers on the mailing lists and I'm wondering about them. One obvious difference is that the Marvell controller is PCI-X and the LSI controllers are PCI-E. Supermicro have several LSI controllers. AOC-USASLP-L8i with the
2008 Jul 17
4
RFE: -t flag for ''zfs destroy''
I would like to request an additional flag for the command-line zfs tools. Specifically, I'd like to have a -t flag for "zfs destroy", as shown below. Suppose I have a pool "home" with child filesystem "will", and a snapshot "home/will@yesterday". Then I run the following commands: # zfs destroy -t volume home/will@yesterday zfs: not
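The -t flag in the quoted command is the proposed feature, not an existing option. With the current tools, a similar guard can be approximated by checking the dataset type before destroying it; a sketch using the hypothetical names from the post:

  # zfs get -H -o value type home/will@yesterday
  snapshot
  # zfs destroy home/will@yesterday    (proceed only if the reported type matches what you intended to destroy)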
2008 Jul 10
49
Supermicro AOC-USAS-L8i
On Wed, Jul 9, 2008 at 1:12 PM, Tim <tim at tcsac.net> wrote: > Perfect. Which means good ol' supermicro would come through :) WOHOO! > > AOC-USAS-L8i > > http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm Is this card new? I'm not finding it at the usual places like Newegg, etc. It looks like the LSI SAS3081E-R, but probably at 1/2 the
2007 Jun 14
44
Best use of 4 drives?
I'm putting together a NexentaOS (b65)-based server that has four 500 GB drives on it. Currently it has two, set up as a ZFS mirror. I'm able to boot Nexenta from it, and it seems to work ok. But, as I've learned, the mirror is not properly redundant, and so I can't just have a drive fail (when I pull one, the OS ends up hanging, and even if I replace it, I have to
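With four equal drives the two common layouts are a single raidz vdev (usable capacity of three disks, survives one failure) or a pool of two two-way mirrors (usable capacity of two disks, survives one failure per mirror, generally better random I/O). A hedged sketch with hypothetical device names, one layout or the other:

  # zpool create tank raidz c1d0 c1d1 c2d0 c2d1
  # zpool create tank mirror c1d0 c1d1 mirror c2d0 c2d1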
2007 Sep 26
3
zpool status (advanced listing)?
Under the GUI, there is an "advanced" option which shows vdev capacity, etc. I'm drawing a blank about how to get that with the command-line tools... Thanks, David
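From the command line, per-vdev capacity and activity similar to the GUI's advanced view can be read from zpool iostat in verbose mode (assuming a pool named "tank"):

  # zpool iostat -v tank      (per-vdev capacity plus operations and bandwidth)
  # zpool list tank           (pool-level size, used and available space)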
2008 Oct 06
15
Looking for some hardware answers, maybe someone on this list could help
I posted a thread here... http://forums.opensolaris.com/thread.jspa?threadID=596 I am trying to finish building a system and I kind of need to pick a working NIC and onboard SATA chipset (video is not a big deal - I can get a silent PCIe card for that, I already know one which works great). I need 8 onboard SATA ports. I would prefer an Intel CPU. At least one gigabit port. That's about it. I
2007 May 31
3
zfs boot error recovery
Hi all, I would like to ask some questions regarding best practices for ZFS recovery if disk errors occur. Currently I have ZFS boot (nv62) and the following setup: 2 si3224 controllers (each with 4 SATA disks), 8 SATA disks, same size, same type. I have two pools: a) rootpool, b) datapool. The rootpool is a mirrored pool, where every disk has a slice (s0, which is 5% of the whole disk) and this
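A trimmed sketch of the layout described, with hypothetical device names and only two of the eight disks shown: the small s0 slice of each disk goes into the mirrored root pool, and a larger slice (s7 here, an assumption) into the data pool, whose layout is not specified in the excerpt (a mirror is used for illustration):

  # zpool create rootpool mirror c0t0d0s0 c0t1d0s0
  # zpool create datapool mirror c0t0d0s7 c0t1d0s7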
2007 Jul 05
4
ZFS receive issue running multiple receives and rollbacks
Hi, all, Environment: S10U3 running as a VMware Workstation 6 guest; Fedora 7 is the VMware host, 1 GB RAM. I'm creating a solution in which I need to be able to save off state on one host, then restore it on another. I'm using ZFS snapshots with ZFS receive and it's all working fine, except for some strange behavior when I perform multiple rollbacks and receives.
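A minimal sketch of the save-on-one-host, restore-on-another flow described, with hypothetical dataset and host names; the -F on the receiving side rolls the target back to its most recent snapshot before applying the stream, which is where repeated rollbacks and receives interact:

  # zfs snapshot tank/state@save1
  # zfs send tank/state@save1 | ssh otherhost zfs receive -F tank/state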
2008 Jul 13
9
[RFC] Improved versioned pointer algorithms
Greetings, filesystem algorithm fans. The recent, detailed description of the versioned pointer method for volume versioning is here: http://linux.derkeiler.com/Mailing-Lists/Kernel/2008-07/msg02663.html I apologize humbly for the typo in the first sentence. Today's revision of the proof of concept code is cleaned up to remove some redundant logic from the exception delete and
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list, as this matter pops up every now and then in posts on this list, I just want to clarify that the real performance of RaidZ (in its current implementation) is NOT anything that follows from raidz-style data-efficient redundancy or the copy-on-write design used in ZFS. In an M-way mirrored setup of N disks you get the write performance of the worst disk and a read performance that is
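A rough back-of-envelope illustration of the point, assuming 8 disks and roughly 100 random-read IOPS per disk: a single 8-disk raidz vdev delivers on the order of one disk's worth of random-read IOPS (about 100), because each block is spread across the vdev, while four 2-way mirrors deliver roughly 4 x 100 = 400 random-write IOPS and up to 8 x 100 = 800 random-read IOPS, since each side of a mirror can serve reads independently.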
2010 Mar 02
9
Filebench Performance is weird
Greetings all, I am using the Filebench benchmark in interactive mode to test ZFS performance with a randomread workload. My Filebench settings & run results are as follows ------------------------------------------------------------------------------------------ filebench> set $filesize=5g filebench> set $dir=/hdd/fs32k filebench> set $iosize=32k filebench> set
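The quoted settings are the front half of a typical interactive Filebench session; a hedged sketch of the remaining steps, assuming the randomread personality that ships with Filebench:

  filebench> load randomread
  filebench> set $dir=/hdd/fs32k
  filebench> set $filesize=5g
  filebench> set $iosize=32k
  filebench> run 60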
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4-core Sun Ultra 40 with 20 GB RAM, I am setting up a Sun StorageTek 2540 with 12 300 GB 15K RPM SAS drives connected via load-shared 4 Gbit FC links. This week I have tried many different configurations, using firmware-managed RAID, ZFS-managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
2007 Jun 19
38
ZFS Scalability/performance
Hello, I'm quite interested in ZFS, like everybody else I suppose, and am about to install FBSD with ZFS. On that note, I have a different first question to start with. I personally am a Linux fanboy, and would love to see/use ZFS on Linux. I assume that I can use those ZFS disks later with any OS that recognizes ZFS, correct? e.g. I can install/setup ZFS in FBSD, and later use
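Pools are moved between operating systems with export and import rather than anything filesystem-specific; the usual caveat is that the destination implementation must support the pool's on-disk version. A minimal sketch with a hypothetical pool name:

  # zpool export tank      (on the FreeBSD host, before moving the disks)
  # zpool import tank      (on the new host; zpool import with no arguments lists importable pools)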
2008 Dec 09
0
forestplot and x axis scale
Hello R users, I would like to create several forestplots with the same X axis, so that if you were to look at the plots lined up, all the X axes would be identical (and the different plots could be compared). Here is one version of the code I've used: mytk10 <- c(0.1, 0.5, 1, 2, 5, 10) pdf(file = "myfile.pdf", pointsize = 7, paper = "letter", width = 6, height = 9)