similar to: creating ZFS mirror over iSCSI between two DELL MD3000i arrays

Displaying 20 results from an estimated 3000 matches similar to: "creating ZFS mirror over iSCSI between two DELL MD3000i arrays"

2008 Jun 09
0
creating ZFS mirror over iSCSI between two DELL MD3000i arrays
Hi, I've looked at ZFS for a while now and I'm wondering if it's possible, on a server, to create a ZFS mirror between two different iSCSI targets (two MD3000i arrays located in two different server rooms). Or is there any setup you would recommend for maximal data protection? Thanks, /Thom -- This message posted from opensolaris.org
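For reference, a minimal sketch of such a setup on a Solaris initiator, assuming each array exports one LUN and those LUNs appear as c2t0d0 and c3t0d0 (the portal addresses and device names here are hypothetical):

    # point the initiator at both arrays and enable SendTargets discovery
    iscsiadm add discovery-address 10.0.1.10:3260
    iscsiadm add discovery-address 10.0.2.10:3260
    iscsiadm modify discovery --sendtargets enable

    # mirror one LUN from each array, so either server room can be lost
    zpool create tank mirror c2t0d0 c3t0d0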
2010 Feb 20
6
l2arc current usage (population size)
Hello, How do you tell how much of your L2ARC is populated? I've been looking for a while now and can't seem to find it. It must be easy, as this blog entry shows it over time: http://blogs.sun.com/brendan/entry/l2arc_screenshots And a follow-up: can you tell how much of each dataset is in the ARC or L2ARC? -- This message posted from opensolaris.org
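One way to read this, assuming the standard arcstats kstat names (they can vary between builds): l2_size reports the bytes currently held in the L2ARC. A per-dataset breakdown of ARC/L2ARC contents is not exposed as a kstat, as far as I know.

    # bytes currently stored in the L2ARC, plus hit/miss counters
    kstat -p zfs:0:arcstats:l2_size
    kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses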
2011 Oct 24
1
ZFS in front of MD3000i
We're setting up ZFS in front of an MD3000i (and attached MD1000 expansion trays). The rule of thumb is to let ZFS manage all of the disks, so we wanted to expose each MD3000i spindle via a JBOD mode of some sort. Unfortunately, it doesn't look like the MD3000i supports this (though this[1] post seems to reference an Enhanced JBOD mode...), so we decided to create a whole bunch of
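A common workaround, sketched here on the assumption that each spindle has been exported from the MD3000i as its own single-disk RAID 0 virtual disk and that the resulting LUNs show up under the (hypothetical) device names below:

    # hand ZFS the per-spindle LUNs and let it provide the redundancy
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0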
2011 Jan 21
1
CentOS and Dell MD3200i / MD3220i iSCSI w/ multipath
We've been wrestling with this for ... rather longer than I'd care to admit. Host / initiator systems are a number of real and virtualized CentOS 5.5 boxes. Storage arrays / targets are Dell MD3220i storage arrays. CentOS is not a Dell-supported configuration, and we've had little helpful advice from Dell. There's been some amount of FUD in that Dell don't seem to know what
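For these arrays the key piece is the RDAC handler; a typical /etc/multipath.conf device stanza looks roughly like the following, though the exact keywords are an assumption and differ between multipath-tools versions:

    device {
            vendor                "DELL"
            product               "MD32xxi"
            hardware_handler      "1 rdac"
            path_grouping_policy  group_by_prio
            path_checker          rdac
            failback              immediate
    }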
2008 Aug 20
3
iscsi and the last mile...
I have a new Dell PowerEdge 2950 running CentOS 5.0 out of the box and a Dell MD3000i. I am new to iSCSI and, with Google and the included documentation, am having a heck of a time trying to get the RAID volumes I have created on the 3000i to be seen by the OS as usable drives. I have printed out the SMcli and iscsiadm documentation. I have asked on the linux-poweredge at dell.com list, too. Many
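The usual open-iscsi sequence on the Linux side, assuming a portal address of 192.168.130.101 (hypothetical):

    # discover the targets the array offers, then log in to them
    iscsiadm -m discovery -t sendtargets -p 192.168.130.101
    iscsiadm -m node --login

    # the mapped virtual disks should now appear as SCSI block devices
    fdisk -l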
2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
Hello. I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple of weeks ago). For no obvious reason (at least to me), after an uptime of 1 to 2 days (observed 3 times now), Solaris sets arc_no_grow to 1 and then never sets it back to 0. The ARC is shrunk to less than 1 GB -- needless to say, performance is terrible. There is not much load on this system. Memory
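One way to watch the flag (and the ARC state generally) on a live system, assuming mdb -k is available and the variable name matches your build:

    # print the current value of arc_no_grow (1 = ARC will not grow)
    echo "arc_no_grow/D" | mdb -k

    # summary of ARC sizes and targets
    echo "::arc" | mdb -k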
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello. We're currently using a Sun Blade 1000 (2x750MHz, 1G RAM, 2x160MB/s mpt SCSI buses, skge GigE network) as an NFS backend with ZFS for distribution of free software like Debian (cdimage.debian.org, ftp.se.debian.org) and have run into some performance issues. We are running SX snv_48 and have run with a raidz2 of 7x300G for a while now; we just added another 7x300G raidz2 today, but
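If the metadata-heavy NFS load is suspected of blowing out the DNLC, its hit rate can be checked with the standard kstat, assuming the usual Solaris statistic names:

    # directory name lookup cache hits vs. misses
    kstat -p unix:0:dnlcstats:hits unix:0:dnlcstats:misses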
2008 Jun 07
4
Mixing RAID levels in a pool
Hi, I had a plan to set up a ZFS pool with different RAID levels, but I ran into an issue based on some testing I've done in a VM. I have 3x 750 GB hard drives and 2x 320 GB hard drives available, and I want to set up a raidz for the 750 GB drives and a mirror for the 320 GB drives and add them all to the same pool. I tested detaching a drive and it seems to seriously mess up the entire pool, and I
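Such a mixed-redundancy pool can be created in one step, but zpool deliberately refuses unless forced; a sketch with hypothetical device names:

    # one raidz vdev (750 GB drives) plus one mirror vdev (320 GB drives);
    # -f overrides the "mismatched replication level" refusal
    zpool create -f tank raidz c1t0d0 c1t1d0 c1t2d0 mirror c2t0d0 c2t1d0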
2010 Apr 22
2
iSCSI / GFS shared web server file system
We currently have a MD3000i with an iSCSI LUN shared out to our apache web server. We are going to add another apache web server into the mix using LVS to load balance, however, I am curious how well iSCSI handles file locking and data integrity. I have the iSCSI partition formatted as ext3. Is my setup totally flawed and will ext3 not allow for data integrity with multiple apache hosts
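ext3 assumes a single mounter and will corrupt if two hosts write to the same block device, so a shared-writer setup needs a cluster filesystem. A sketch of the GFS2 route, assuming a two-node cluster named web (names hypothetical):

    # one journal (-j) per node that will mount the filesystem
    mkfs.gfs2 -p lock_dlm -t web:apache -j 2 /dev/sdb1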
2009 Apr 08
2
ZFS data loss
Hi, I have lost a ZFS volume and I am hoping to get some help to recover the information (a couple of months' worth of work :( ). I have been using ZFS for more than 6 months on this project. Yesterday I ran a "zpool status" command, the system froze and rebooted. When it came back the discs were not available. See below the output of "zpool status", "format"
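The usual first steps here, assuming the pool is named tank; note that the recovery-mode import (-F) only exists on newer builds:

    # list pools that are visible but not imported
    zpool import

    # forced import, then (if available) a rewind/recovery import
    zpool import -f tank
    zpool import -F tank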
2007 Aug 27
1
Nested ZFS sharenfs exports are empty on automount clients
Hello, I've got nested ZFS filesystems exported via NFS. They are mounted on the clients using automount (from a NIS map). But: only the root exported filesystem shows any contents on the clients. Any sub-directories it has are fine, but any sub-filesystems are empty. i.e. NIS map auto.stuff contains "stuff server:/stuff/images" server% zfs get sharenfs stuff/images
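Each child ZFS filesystem is a separate NFS export, so a map entry that only mounts the parent shows empty stubs where the children live. A hierarchical automount entry is one fix; a sketch based on the layout above:

    # auto.stuff: mount the parent and each sub-filesystem explicitly
    stuff  /        server:/stuff \
           /images  server:/stuff/images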
2008 Jan 24
5
Mirrors with Uneven Drives!?
I didn't think this was possible, but apparently it is. How does this work? How do you mirror data on a 3 disk set? This message posted from opensolaris.org
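A three-disk mirror simply keeps a complete copy on every disk, so usable capacity is that of the smallest drive. A sketch with hypothetical device names:

    # three-way mirror: survives the loss of any two disks
    zpool create tank mirror c1t0d0 c1t1d0 c1t2d0

    # or grow an existing two-way mirror into a three-way one
    zpool attach tank c1t0d0 c1t2d0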
2010 Aug 03
2
When is the L2ARC refreshed if on a separate drive?
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the Pro version) as an L2ARC to the single mirrored pair. I'm running b134, with ZFS pool version 22, with dedup enabled. If I understand correctly, the dedup table should be in
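For what it's worth, on these builds the L2ARC is not persistent: it starts cold after every reboot and refills gradually from buffers evicted out of the ARC. The cache device itself is attached with a one-liner (device name hypothetical):

    # add the SSD as an L2ARC cache device
    zpool add tank cache c3t0d0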
2008 Jan 10
2
NCQ
fun example that shows NCQ lowers wait and %w, but doesn't have much impact on final speed. [scrubbing, devs reordered for clarity]

                      extended device statistics
    device    r/s   w/s     kr/s   kw/s  wait  actv  svc_t  %w  %b
    sd2     454.7   0.0  47168.0    0.0   0.0   5.7   12.6   0  74
    sd4     440.7   0.0  45825.9    0.0   0.0   5.5   12.4   0  78
    sd6     445.7   0.0
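The output above matches the extended-statistics format of Solaris iostat; to reproduce a view like it:

    # extended device statistics at one-second intervals
    iostat -x 1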
2010 Apr 02
6
L2ARC & Workingset Size
Hi all, I ran a workload that reads & writes within 10 files; each file is 256M, i.e. 10 * 256M = 2.5GB total dataset size. I have set the ARC max size to 1 GB in the /etc/system file. In the worst case, let us assume that the whole dataset is hot, meaning my working set size = 2.5GB. My SSD flash size = 8GB and is being used for L2ARC. No slog is used in the pool. My filesystem record size = 8K,
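A rough sizing check for that configuration, assuming on the order of 200 bytes of ARC header per L2ARC record (the exact header size varies by build):

    # records needed to hold the whole working set at an 8K record size:
    #   2.5 GB / 8 KB = 327,680 records
    # ARC memory consumed just to index them in the L2ARC:
    #   327,680 * ~200 B = ~63 MB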
2008 Feb 21
3
raidz2 resilience on 3 disks
Hello, 1) If I create a raidz2 pool on some disks, start to use it, and then the disks' controllers change, what will happen to my zpool? Will it be lost, or is there some disk tagging which allows ZFS to recognise the disks? 2) If I create a raidz2 on 3 HDs, do I have any resilience? If any one of those drives fails, do I lose everything? I've got one such pool and
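On the second question: raidz2 always carries two parity devices, so even a three-disk raidz2 survives any two drive failures, at the cost of leaving only one disk's worth of usable capacity (device names hypothetical):

    # 3-disk raidz2: tolerates 2 failures, usable space = 1 disk
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0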
2009 Feb 05
1
nfs sharing of zfs sub filesystems - can it be done?
I'm new to ZFS and OpenSolaris and so am not sure of the correct terminology for this question. I have a test machine running OpenSolaris 2008.11 (which I have been very impressed with so far). It has 1 disc for boot and 3 as a zpool (called tank, as per the majority of the examples ;)). This machine is a test "SAN" for use by a second test machine running VMware ESX (the free
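The sharenfs property is inherited down the tree, but every filesystem remains its own export, so clients must mount each child separately (or use an NFSv4 client that follows mirror mounts):

    # share the whole tree; children inherit sharenfs=on
    zfs set sharenfs=on tank
    zfs get -r sharenfs tank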
2010 Dec 09
3
How many files & directories in a ZFS filesystem?
Looking for a little help, please. A contact from Oracle (Sun) suggested I pose the question to this list. We're using ZFS on Solaris 10 in an application where there are so many directory/subdirectory layers, and so many small files (~1-2KB), that we ran out of inodes (over 30 million!). So the ZFS question is: how can we see how many files & directories have been created in
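ZFS allocates its equivalent of inodes (objects) dynamically rather than from a fixed pool; the per-dataset object count can be read with zdb, though the exact output fields are build-dependent (dataset name hypothetical):

    # prints, among other things, the number of objects in the dataset
    zdb -d tank/fs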
2010 Feb 18
3
improve meta data performance
We have a SunFire X4500 running Solaris 10U5 which does about 5-8k NFS ops, of which about 90% are metadata. In hindsight it would have been significantly better to use a mirrored configuration, but we opted for 4 x (9+2) raidz2 at the time. We cannot take the downtime necessary to change the zpool configuration. We need to improve the metadata performance with little to no money. Does anyone
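One low-cost option along these lines, assuming a release new enough to support cache devices and the secondarycache property (Solaris 10U5 itself may predate both):

    # add an SSD as L2ARC and devote it to metadata only
    zpool add tank cache c4t0d0
    zfs set secondarycache=metadata tank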
2009 Feb 17
5
scrub on snv-b107
scrub completed after 1h9m with 0 errors on Tue Feb 17 12:09:31 2009 This is about twice as slow as the same scrub on a Solaris 10 box with a mirrored ZFS root pool. Has scrub become that much slower? And if so, why? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS sxce snv107 ++ + All that's really worth doing is what we do for others (Lewis Carroll)