Displaying 20 results from an estimated 10000 matches similar to: "creating ZFS mirror over iSCSI between two DELL MD3000i arrays"
2008 Jun 09
2
creating ZFS mirror over iSCSI between two DELL MD3000i arrays
Hi,
I've looked at ZFS for a while now and I'm wondering whether it's possible, on a single server, to create a ZFS mirror between two different iSCSI targets (two MD3000i arrays located in two different server rooms).
Or is there any setup you would recommend for maximum data protection?
Thanks,
/Thom
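For context, a minimal sketch of what this could look like on the Solaris initiator side, assuming one LUN has already been created on each MD3000i; the array addresses and device names below are placeholders:

  # point the initiator at both arrays and enable SendTargets discovery
  iscsiadm add discovery-address 192.168.10.1:3260
  iscsiadm add discovery-address 192.168.20.1:3260
  iscsiadm modify discovery --sendtargets enable

  # create device nodes for the newly visible LUNs
  devfsadm -i iscsi

  # mirror one LUN from each array, so either server room can be lost
  zpool create tank mirror c2t600A0B8000111111d0 c3t600A0B8000222222d0
  zpool status tank

Note that every synchronous write then waits on the slower of the two paths, so the room-to-room link largely sets write latency.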
2011 Oct 24
1
ZFS in front of MD3000i
We're setting up ZFS in front of an MD3000i (and attached MD1000
expansion trays).
The rule of thumb is to let ZFS manage all of the disks, so we wanted
to expose each MD3000i spindle via a JBOD mode of some sort.
Unfortunately, it doesn't look like the MD3000i supports this (though this[1]
post seems to reference an Enhanced JBOD mode...), so we decided to
create a whole bunch of
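Assuming the workaround ends up being one single-disk (RAID 0) virtual disk per spindle, each mapped to the host as its own LUN, the ZFS side could then be laid out roughly like this (device names invented):

  # each cXtYd0 is one MD3000i/MD1000 spindle presented as a single-disk LUN
  zpool create tank \
      raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 \
      raidz2 c4t7d0 c4t8d0 c4t9d0 c4t10d0 c4t11d0 c4t12d0 c4t13d0 \
      spare c4t14d0
  zpool status tank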
2011 Jan 21
1
CentOS and Dell MD3200i / MD3220i iSCSI w/ multipath
We've been wrestling with this for ... rather longer than I'd care to admit.
Host / initiator systems are a number of real and virtualized CentOS 5.5
boxes. The storage targets are Dell MD3220i arrays.
CentOS is not a Dell-supported configuration, and we've had little helpful
advice from Dell. There's been some amount of FUD in that Dell don't seem
to know what
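For anyone else fighting the same combination, a hedged sketch of the /etc/multipath.conf device stanza commonly used with the MD32xxi family; exact keywords vary with the device-mapper-multipath version (older CentOS 5 builds use prio_callout rather than prio), so treat this as a starting point:

  # /etc/multipath.conf (fragment) -- MD32xx/MD32xxi arrays use the RDAC handler
  devices {
      device {
          vendor                "DELL"
          product               "MD32xx"
          hardware_handler      "1 rdac"
          path_grouping_policy  group_by_prio
          prio                  rdac
          path_checker          rdac
          failback              immediate
          no_path_retry         30
      }
  }

The scsi_dh_rdac module also needs to be loaded before multipathd builds its maps.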
2008 Aug 20
3
iscsi and the last mile...
I have a new Dell PowerEdge 2950 running CentOS 5.0 out-of-box and a Dell
MD3000i. I am new to iscsi and, with google and included documentation,
am having a heck of a time trying to get the RAID volumes I have created
on the 3000i to be seen by the OS as usable drives. I have printed out
SMcli and iscsiadm documentation.
I have asked on the linux-poweredge at dell.com site, too.
Many
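In case it helps the next person, a short sketch of the open-iscsi steps on the CentOS side, assuming the 2950's initiator IQN has already been mapped to the virtual disks in the MD3000i host mappings (the portal IP is a placeholder):

  # discover the targets the MD3000i advertises, then log in
  iscsiadm -m discovery -t sendtargets -p 192.168.130.101:3260
  iscsiadm -m node --login

  # the mapped virtual disks should now appear as ordinary SCSI disks
  cat /proc/partitions
  fdisk -l

  # partition and make a filesystem as usual, e.g.
  mkfs.ext3 /dev/sdb1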
2010 Apr 22
2
iSCSI / GFS shared web server file system
We currently have an MD3000i with an iSCSI LUN shared out to our apache web
server. We are going to add another apache web server into the mix using
LVS to load balance, however, I am curious how well iSCSI handles file
locking and data integrity. I have the iSCSI partition formatted as ext3.
Is my setup totally flawed and will ext3 not allow for data integrity with
multiple apache hosts
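One point worth making explicit: ext3 has no cluster awareness, so two hosts mounting the same LUN read-write will corrupt it no matter how Apache behaves. A rough sketch of the GFS2 route instead (cluster name, filesystem name and device are placeholders, and a working cman cluster is assumed):

  # one journal per node that will mount it (-j 2 for two web servers)
  mkfs.gfs2 -p lock_dlm -t webcluster:webdata -j 2 /dev/mapper/md3000i-web

  # mount on each node once the cluster stack is up
  mount -t gfs2 /dev/mapper/md3000i-web /var/www/shared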
2009 Nov 06
0
iscsi connection drop, comes back in seconds, then deadlock in cluster
Greetings ocfs2 folks,
A client is experiencing some random deadlock issues within a cluster;
we're wondering if anyone can point us in the right direction. The iSCSI
connection seemed to have dropped on one node briefly, ultimately
several hours later landing us in a complete deadlock scenario where
multiple nodes (Node 7 and Node 8) had to be panic'd (by hand - they
didn't ever panic on
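Not a diagnosis, but one knob that often matters in this failure pattern is how long open-iscsi queues I/O after a connection drop before failing it upward; a sketch of the relevant iscsid.conf settings (values illustrative, to be tuned against the o2cb heartbeat timeouts):

  # /etc/iscsi/iscsid.conf (fragment)
  # how long to queue I/O while a session is down before erroring it out
  node.session.timeo.replacement_timeout = 15
  # NOP-OUT pings so a dead connection is noticed quickly
  node.conn[0].timeo.noop_out_interval = 5
  node.conn[0].timeo.noop_out_timeout = 5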
2011 Apr 21
1
iscsi multipath fails
Hi all,
I have a Dell server running CentOS 5.6 (a new install), connecting to an IBM
DS3500. I have configured iscsi connections using iscsid and can log into
the targets on the IBM. I can also mount the LUNs when accessing them from
their active controller path. When I throw multipath into the mix, it fails
completely. Multipath itself is working: when I run multipath -ll it shows me the
correct active
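A hedged troubleshooting sketch, assuming the DS3500 behaves like other LSI-based arrays and wants the RDAC device handler in place before multipathd grabs the paths:

  # confirm the RDAC handler is loaded
  modprobe scsi_dh_rdac
  lsmod | grep scsi_dh_rdac

  # flush stale maps and rebuild verbosely to see why paths are rejected
  multipath -F
  multipath -v3
  multipath -ll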
2011 Jan 22
2
CentOS and Dell MD3200i / MD3220i iSCSI w/ multipath -- slightly OT
Greetings,
On 1/22/11, Edward Morbius <dredmorbius at gmail.com> wrote:
> CentOS is not a Dell-supported configuration, and we've had little helpful
> advice from Dell. There's been some amount of FUD in that Dell don't seem
> to know what Dell's own software installation (the md3
>
> Dell doesn't seem to have much OS experience generally.
>
+1
It is
2007 Oct 05
2
zfs + iscsi target + vmware esx server
I'm posting here as this seems to be a zfs issue. We also have an open ticket with Sun support, and I've heard another large Sun customer is also reporting this as an issue.
Basic Problem: Create a zfs file system and set shareiscsi to on. On a VMware ESX server, discover that iSCSI target. It shows up as 249 LUNs. When attempting to then add the storage, the ESX server eventually
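For reference, the configuration being described is the legacy shareiscsi path; a minimal sketch of reproducing it (pool and volume names invented):

  # create a zvol and publish it via the shareiscsi property
  zfs create -V 100G tank/esx01
  zfs set shareiscsi=on tank/esx01

  # verify what the target daemon is actually exporting
  iscsitadm list target -v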
2009 Dec 05
4
Using iSCSI on ZFS with non-native FS - How to backup.
Hi there.
I'm looking at moving my home server to ZFS and adding a second server for backup purposes.
In the process of researching ZFS I noticed iSCSI.
I'm thinking of creating a zvol, sharing it with iSCSI, and using it with my Mac(s).
In this scenario, the fs would obviously have to be HFS+.
Now, my question is:
How would I go about replicating this non-native FS to the backup server?
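Since a zvol is just a block device to ZFS, replication doesn't care that its contents are HFS+; a sketch of snapshot-based replication (host and dataset names are placeholders, and the Mac side should ideally be unmounted or quiesced first so the HFS+ image is consistent):

  # initial full copy of the zvol to the backup box
  zfs snapshot tank/macvol@2009-12-05
  zfs send tank/macvol@2009-12-05 | ssh backuphost zfs receive backup/macvol

  # later: send only the delta between consecutive snapshots
  zfs snapshot tank/macvol@2009-12-06
  zfs send -i tank/macvol@2009-12-05 tank/macvol@2009-12-06 | \
      ssh backuphost zfs receive backup/macvol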
2010 Aug 26
0
zfs/iSCSI: 0000 = SNS Error Type: Current Error (0x70)
Hi,
I'm trying to track down an error with a 64bit x86 OpenSolaris 2009.06 ZFS shared via iSCSI and an Ubuntu 10.04 client. The client can successfully log in, but no device node appears. I captured a session with wireshark. When the client attempts a "SCSI: Inquiry LUN: 0x00", OpenSolaris sends a "SCSI Response (Check Condition) LUN:0x00" that contains the
2009 Jan 08
0
[storage-discuss] ZFS iscsi snapshot - VSS compatible?
I don't know if VSS has this capability, but essentially, if it can temporarily quiesce a device the way a database does for a "warm standby", then a snapshot should work. This would be a very simple Windows-side script/batch:
1) Q-Disk
2) Remote trigger snapshot
3) Un Q-Disk
I have no idea where to even begin researching VSS unfortunately...
James
2008 Aug 04
3
DomU with ZFS root on iSCSI - any tips?
Hi Folks,
Just wondering if anyone had any tips for trying to install an NV 94 DomU
with ZFS root to an iSCSI Target?
The iSCSI Target happens to be an NV 94 system with ZVOLs exported as the
Targets, but I wouldn't think that would matter.
I tried this last week and the install seemed to complete fine, but when
the DomU attempted to reboot after install, I received a message to the
2007 Feb 24
1
zfs received vol not appearing on iscsi target list
Just installed Nexenta and I've been playing around with zfs.
root@hzsilo:/tank# uname -a
SunOS hzsilo 5.11 NexentaOS_20070105 i86pc i386 i86pc Solaris
root@hzsilo:/tank# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
home             89.5K   219G    32K  /export/home
tank              330K  1.78T  51.9K  /tank
tank/iscsi_luns   147K
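One thing worth checking here (a guess, with the volume name below invented): a plain zfs send does not carry properties with it, so a received zvol will not have shareiscsi=on unless it inherits it or it is set again by hand:

  # see whether the received volume actually carries the property
  zfs get shareiscsi tank/iscsi_luns/vol1

  # set it explicitly and re-check the target list
  zfs set shareiscsi=on tank/iscsi_luns/vol1
  iscsitadm list target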
2009 Aug 06
4
Can I set 'zil_disable' to increase ZFS/iSCSI performance?
Is there any way to increase the ZFS performance?
--
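For the record, the tunable in the subject did exist at the time, but it disables the ZIL for every pool on the machine and sacrifices synchronous-write guarantees; a sketch of how it was typically set, plus the less drastic alternative of a separate log device (device name is a placeholder):

  # /etc/system -- takes effect at next boot, affects ALL pools
  set zfs:zil_disable = 1

  # or flip it on a live system (temporary, root only)
  echo zil_disable/W0t1 | mdb -kw

  # the safer route for iSCSI sync-write latency: a dedicated log device
  zpool add tank log c5t0d0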
2010 Jul 12
0
Zfs pool / iscsi lun with windows initiator.
Hi friends,
I have a problem. I have a file server that mounts large volumes through the iSCSI initiator. The problem is that on the ZFS side the pool shows no available space, although I am 100% sure there are at least 5 TB free. Because the pool shows 0 available, all iSCSI connections are lost and the whole sharing setup goes away, and a restart is needed to fix it. Up to now I have kept it alive by deleting snapshots
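A hedged sketch of the usual way to see where the space has gone and to stop the zvol from being starved by its own snapshots (pool and volume names are placeholders):

  # break used space down into snapshots vs. the datasets themselves
  zfs list -o space -r pool
  zfs list -t snapshot -r pool

  # destroy snapshots that are no longer needed
  zfs destroy pool/filelun@old-snap

  # a refreservation guarantees the zvol its full size as snapshots grow
  zfs set refreservation=5T pool/filelun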
2007 Jul 04
2
ZFS, iSCSI + Mac OS X Tiger (globalSAN iSCSI)
I have set up an iSCSI ZFS target that seems to connect properly from the
Microsoft Windows initiator in that I can see the volume in MMC Disk
Management.
When I shift over to Mac OS X Tiger with globalSAN iSCSI, I am able to set
up the Targets with the target name shown by `iscsitadm list target` and
when I actually connect or "Log On" I see that one connection exists on the
Solaris
2009 Jan 02
3
ZFS iSCSI (For VirtualBox target) and SMB
Hey all,
I'm setting up a ZFS-based fileserver to use both as a shared network drive and, separately, to have an iSCSI target to be used as the "hard disk" of a Windows-based VM running on another machine.
I've built the machine, installed the OS, created the RAIDZ pool and now have a couple of questions (I'm pretty much new to Solaris, by the way, but have been
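A minimal sketch of the two exports on a 2009-era OpenSolaris build, with names invented (the iSCSI side here uses the old shareiscsi property rather than COMSTAR):

  # block device for the Windows VM, exported over iSCSI
  zfs create -V 60G tank/vbox-win
  zfs set shareiscsi=on tank/vbox-win

  # ordinary dataset exported over SMB as the shared network drive
  zfs create tank/share
  zfs set sharesmb=on tank/share
  smbadm join -w WORKGROUP        # workgroup mode; joining a domain differs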
2011 Sep 14
3
Is there any implementation of VSS for a ZFS iSCSI snapshot on Solaris?
I am using a Solaris + ZFS environment to export an iSCSI
block-layer device and use the snapshot facility to take a snapshot of the ZFS
volume. Is there an existing Volume Shadow Copy (VSS) implementation on
Windows for this environment?
Thanks
S Joshi
2010 Jul 16
1
Making a zvol unavailable to iSCSI trips up ZFS
I've been experimenting with a two-system setup in snv_134 where each
system exports a zvol via COMSTAR iSCSI. One system imports both its
own zvol and the one from the other system and puts them together in
a ZFS mirror.
I manually faulted the zvol on one system by physically removing some
drives. What I expect to happen is that ZFS will fault the zvol pool
and the iSCSI stack will
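For anyone trying to reproduce this, a sketch of the COMSTAR export on each box and the cross-mirror on the importing host (GUIDs and device names are placeholders):

  # on each system: publish the local zvol through COMSTAR
  zfs create -V 200G tank/xmirror
  sbdadm create-lu /dev/zvol/rdsk/tank/xmirror
  stmfadm add-view 600144f0aabbccdd        # GUID as printed by create-lu (placeholder)
  itadm create-target

  # on the importing system: mirror the local LUN against the remote one
  zpool create xpool mirror c0t600144F000AAAAAAd0 c0t600144F000BBBBBBd0
  zpool status xpool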