
Displaying 20 results from an estimated 120 matches similar to: "can I configure a fc-san initiator for a storage array?"

2010 Oct 11
0
Ubuntu iSCSI install to COMSTAR zfs volume Howto
I apologize if this has been covered before. I have not seen a blow-by-blow installation guide for installing Ubuntu onto an iSCSI target. The install guides I have seen assume that you can make a target visible to everyone, which is a problem if you want multiple iSCSI installations on the same COMSTAR target. During install, Ubuntu generates three random initiator names, and you have to deal with them to get things
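The per-installation visibility the poster wants is normally done with COMSTAR host groups rather than an all-hosts view. A minimal sketch, assuming a hypothetical group name and example initiator IQN and LU GUID:

```shell
# Hedged sketch: restrict one LU to one installer's initiators via a
# host group, so other iSCSI installs on the same target cannot see it.
# "ubuntu-install", the IQN, and the GUID below are examples only.
stmfadm create-hg ubuntu-install
stmfadm add-hg-member -g ubuntu-install iqn.1993-08.org.debian:01:abcdef123456
stmfadm add-view -h ubuntu-install -n 0 600144F0EAC0090000004A0A4F410001
```

A view added without `-h` is visible to all initiators, which is exactly the behavior the guides assume.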
2012 Sep 28
2
iscsi confusion
I am confused, because I would have expected a 1-to-1 mapping: if you create an iSCSI target on some system, you would have to specify which LUN it connects to. But that is not the case... I read the man pages for sbdadm, stmfadm, itadm, and iscsiadm. I read some online examples, where you first "sbdadm create-lu", which gives you a GUID for a specific device in the system, and then
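The layering the poster is running into can be sketched as follows; the zvol path and GUID are examples, not values from the thread:

```shell
# Hedged sketch of the COMSTAR layering: backing store, target, and
# view are created independently; views tie LUs to initiators/targets.
sbdadm create-lu /dev/zvol/rdsk/tank/vol1   # registers the LU, prints its GUID
itadm create-target                          # creates an iSCSI target with no LUs yet
stmfadm add-view 600144F0EAC0090000004A0A4F410001
# With no -h/-t options the view exposes the LU through every target to
# every host; a 1-to-1 mapping is opt-in via host and target groups.
```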
2007 Sep 17
1
Strange behavior with ZFS and Solaris Cluster
Hi All, Two- and three-node clusters with SC3.2 and S10u3 (120011-14). If a node is rebooted when using SCSI3-PGR, the node is not able to take the zpool by HAStoragePlus due to a reservation conflict. SCSI2-PGRE is okay. Using the same SAN LUNs in a metaset (SVM) with HAStoragePlus works okay with PGR and PGRE (both SMI- and EFI-labeled disks). If using scshutdown and restarting all nodes, then it will
2009 Aug 07
1
add-view for the zfs snapshot
I first create a LU with "stmfadm create-lu" and add a view, so the initiator can see the created LU. Now I use "zfs snapshot" to create a snapshot of the created LU. What can I do to make the snapshot accessible to the initiator? Thanks. -- This message posted from opensolaris.org
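A snapshot is read-only, so the usual route is to clone it and register the clone as a LU of its own. A sketch with example dataset names (the `<new-GUID>` placeholder stands for whatever GUID create-lu prints):

```shell
# Hedged sketch: expose a snapshot to an initiator by cloning it and
# importing the clone as a separate LU. Dataset names are examples.
zfs snapshot tank/lu1@backup
zfs clone tank/lu1@backup tank/lu1-clone
sbdadm create-lu /dev/zvol/rdsk/tank/lu1-clone   # prints a new GUID
stmfadm add-view <new-GUID>                       # clone now visible to initiators
```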
2009 May 20
1
how to reliably determine what is locking up my zvol?
-bash-3.2# zpool export exchbk cannot remove device links for 'exchbk/exchbk-2': dataset is busy this is a zvol used for a comstar iscsi backend: -bash-3.2# stmfadm list-lu -v LU Name: 600144F0EAC0090000004A0A4F410001 Operational Status: Offline Provider Name : sbd Alias : /dev/zvol/rdsk/exchbk/exchbk-1 View Entry Count : 1 LU Name:
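A zvol registered with STMF typically stays busy until its LU is released. A hedged sketch of one way to free it, using the GUID from the poster's own `stmfadm list-lu -v` output:

```shell
# Hedged sketch: release the sbd hold on the zvol, then retry the export.
# delete-lu removes the STMF registration (it does not touch the zvol data).
stmfadm offline-lu 600144F0EAC0090000004A0A4F410001
stmfadm delete-lu  600144F0EAC0090000004A0A4F410001
zpool export exchbk
```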
2006 Sep 21
1
Dtrace script compilation error.
Hi All, One of our customers is seeing the following error messages. He is using an S10U1 system with all the latest patches. The DTrace script is attached. I am not seeing these error messages on an S10U2 system. What could be the problem? Thanks, Gangadhar. ------------------------------------------------------------------------ rroberto at njengsunu60-2:~$ dtrace -AFs /lib/svc/share/kcfd.d dtrace:
2010 Nov 30
0
Resizing ZFS block devices and sbdadm
sbdadm can be used with a regular ZFS file or a ZFS block device. Is there an advantage to using a ZFS block device and exporting it to COMSTAR via sbdadm, as opposed to using a file and exporting it (e.g. performance or manageability)? Also, let's say you have a 5G block device called pool/test. You can resize it by doing: zfs set volsize=10G pool/test. However, if the device was already
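When the zvol has already been imported, growing the dataset is not enough on its own: the registered LU keeps its old size until sbd is told about the change. A sketch with an example GUID (flags per the sbdadm man page):

```shell
# Hedged sketch: grow the zvol, then propagate the new size to the LU.
zfs set volsize=10G pool/test
sbdadm modify-lu -s 10G 600144F0EAC0090000004A0A4F410001
```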
2004 Oct 05
1
compilation problem R2.0.0 Linux SuSE8.2 [incl. output] (PR#7264)
Sorry, forgot to attach the file... [base64-encoded attachment pdcompilelog.zip (application/zip) omitted]
2008 Dec 05
0
resync onnv_105 partial for 6713916
Author: Darren Moffat <Darren.Moffat at Sun.COM> Repository: /hg/zfs-crypto/gate Latest revision: 957d30a3607ed9f3cbe490da5894d1e1b2104033 Total changesets: 28 Log message: resync onnv_105 partial for 6713916 Files: usr/src/Makefile.lint usr/src/Targetdirs usr/src/cmd/Makefile usr/src/cmd/Makefile.cmd usr/src/cmd/acctadm/Makefile usr/src/cmd/acctadm/acctadm.xcl
2011 May 10
5
Modify stmf_sbd_lu properties
Is it possible to modify the GUID associated with a ZFS volume imported into STMF? To clarify: I have a ZFS volume I have imported into STMF and export via iSCSI. I have a number of snapshots of this volume. I need to temporarily go back to an older snapshot without removing all the more recent ones. I can delete the current sbd LU, clone the snapshot I want to test, and then bring that back in
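On later COMSTAR builds, `stmfadm create-lu` accepts properties, including (as I understand it) a `guid` property, which would let the clone take over the original GUID so existing views and initiator sessions keyed on it still line up. A hedged sketch with example names; verify the `-p guid=` support on your build before relying on it:

```shell
# Hedged sketch: swap in a snapshot clone under the original GUID.
# Dataset names and GUID are examples only.
stmfadm delete-lu 600144F0EAC0090000004A0A4F410001
zfs clone tank/vol@old tank/vol-test
stmfadm create-lu -p guid=600144F0EAC0090000004A0A4F410001 \
    /dev/zvol/rdsk/tank/vol-test
```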
2009 Jan 13
1
consolidating the NUT documentation on permissions, hotplug and udev
Arnaud et al, I have been meaning to collect some of the documentation updates for permission-related errors, and I was wondering if you would mind if we moved the scripts/udev/README and scripts/hotplug/README files out of scripts/ and into the docs/ directory (probably docs/permissions.txt). We could also cover the *BSD /dev/usb* permission issues there, as well. Any thoughts on this? -- -
2012 May 07
14
Has anyone used a Dell with a PERC H310?
I'm trying to configure a Dell R720 (not a pleasant experience) which has an H710p card fitted. The H710p definitely doesn't support JBOD, but the H310 looks like it might (the data sheet mentions non-RAID). Has anyone used one with ZFS? Thanks, -- Ian.
2013 Jan 07
5
mpt_sas multipath problem?
Greetings, We're trying out a new JBOD here. Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. The OS is oi151a7, running on an existing server with a 54TB pool of internal drives. I believe the server hardware is not relevant to the JBOD issue, although the internal drives do appear to the OS with multipath device names (despite the fact
2006 Dec 02
2
Initiator for iscsi?
Anyone running CentOS with an iSCSI filesystem mounted? If so: What version of CentOS? Which iSCSI package? What filesystem are you using on the mount? Does it perform like you'd expect? Thanks, peter
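On CentOS the usual package is iscsi-initiator-utils (open-iscsi). A minimal sketch of discovering and logging into a target; the portal IP and IQN are placeholders:

```shell
# Hedged sketch: discover targets at a portal, then log in to one.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2006-12.com.example:target0 -p 192.168.1.10 --login
# After login a new block device appears (e.g. /dev/sdb) and can be
# formatted and mounted like any local disk.
```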
2007 Jun 14
0
CESA-2007:0497 Moderate CentOS 5 i386 iscsi-initiator-utils Update
CentOS Errata and Security Advisory 2007:0497 Moderate Upstream details at : https://rhn.redhat.com/errata/RHSA-2007-0497.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) i386: ee92f792a8d56ea4db6dde9aad05c452 iscsi-initiator-utils-6.2.0.742-0.6.el5.i386.rpm Source: 9b7a480e0161eb038fef4da33eb3fd15
2007 Jun 14
0
CESA-2007:0497 Moderate CentOS 5 x86_64 iscsi-initiator-utils Update
CentOS Errata and Security Advisory 2007:0497 Moderate Upstream details at : https://rhn.redhat.com/errata/RHSA-2007-0497.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) x86_64: ff04b4bfd78d2d38c8bf88f712f2997f iscsi-initiator-utils-6.2.0.742-0.6.el5.x86_64.rpm Source: 9b7a480e0161eb038fef4da33eb3fd15
2007 Apr 05
0
(open iscsi) initiator crashes
I think I have found the reason for this: The setup runs just fine until I set the xen dom0 to only use one of the four CPUs in my machine (actually 2 HT CPUs). So with (dom0-cpus 0) in /etc/xen/xend-config.sxp this works. The while-loop actually ran fine for 2 days straight. With (dom0-cpus 1) it crashes as described within a few minutes. I will cc this to the xen-list. Full thread here:
2013 Mar 09
0
CEBA-2013:0438 CentOS 6 iscsi-initiator-utils Update
CentOS Errata and Bugfix Advisory 2013:0438 Upstream details at : https://rhn.redhat.com/errata/RHBA-2013-0438.html The following updated files have been uploaded and are currently syncing to the mirrors: ( sha256sum Filename ) i386: 8127ade8d17da59c23de745f65003917fcada41924f969c43969804ae4896e5c iscsi-initiator-utils-6.2.0.873-2.el6.i686.rpm
2010 Jul 12
0
Zfs pool / iscsi lun with windows initiator.
Hi friends, I have a problem. I have a file server which initiates large volumes with an iSCSI initiator. The problem is that on the ZFS side it shows no available space, but I am 100% sure there are at least 5 TB free. Because the ZFS pool shows 0 available, all iSCSI connections get lost and all sharing setup is gone, needing a restart to fix. Until today I have kept deleting snapshots to make it alive
2010 Jun 08
0
ovirt-config-iscsi : semi-random initiator file
Hi, We are experiencing problems with the iSCSI configuration. The /etc/iscsi/initiatorname.iscsi is randomly generated during the node build (%{_libexecdir}/ovirt-config-iscsi in ovirt-node.spec), which makes all nodes have the same "random" initiator file. The iSCSI server doesn't like that very much; it believes the connections come from the same node (-> sessions killed,
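One common fix for this class of problem is to (re)generate the initiator name on each node's first boot rather than baking it into the image. A sketch using `iscsi-iname`, which ships with iscsi-initiator-utils:

```shell
# Hedged sketch: give each node a unique initiator name at first boot.
# iscsi-iname emits a fresh random iqn.1994-05.com.redhat:... string
# per invocation, so no two nodes share an identity.
echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
```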