Hey folks,

I had some general questions, and when reading through the list archives I came across an iSCSI discussion back in February where a couple of individuals were going back and forth about drafting up a "best practices" doc and putting it into a wiki. Did that ever happen? And if so, where is it?

Now my questions:

We are not using iSCSI yet at work, but I see a few places where it would be useful, e.g. a number of heavy-use NFS mounts (from my ZFS appliance) that I believe would be slightly more efficient if I converted them to iSCSI. I also want to introduce some virtual machines which I think would work out best if I created iSCSI drives for them back on my Oracle/Sun ZFS appliance.

I mentioned iSCSI to the guy whose work I have taken over here so that he can concentrate on his real job, and when I mentioned that we should have a separate switch so that all iSCSI traffic is on its own switch, he balked and said something like "it is a switched network, it should not matter". But that does not sit right with me - the little bit I've read about iSCSI in the past always stresses that you should have it on its own network.

So 2 questions:
- how important is it to have it on its own network?
- is it OK to use an unmanaged switch (as long as it is Gigabit), or are there some features of a managed switch that are desirable/required with iSCSI?

thanks,
-Alan

--
"Don't eat anything you've ever seen advertised on TV"
   - Michael Pollan, author of "In Defense of Food"
On Fri, Dec 9, 2011 at 11:27 AM, Alan McKay <alan.mckay at gmail.com> wrote:

> So 2 questions :
> - how important is it to have it on its own network?

I would say very important, but probably not required. A separate network segregates the traffic, and you can secure it better. You can also have failover, etc., and potentially use cheaper switches.

> - is it OK to use an unmanaged switch (as long as it is Gigabit), or are
> there some features of a managed switch that are desirable/required with
> iSCSI?

I've set up two iSCSI storage networks. The first was with unmanaged Dell switches... each was only about $200 I think, and it worked great. For the second, I'm not using switches at all; I'm connecting directly from the NIC on the server to the NIC on the disk array. Fortunately, we only have a couple of servers, and the IBM disk array comes with an additional card that has 4 NICs on it.

You can use managed switches, even ones that are currently supporting your LAN, but I would create some VLANs to separate the traffic. You should also make sure jumbo frames are enabled. If you're concerned about maximizing throughput, a managed switch will have more options to fine-tune this, such as link aggregation, but in my cases I wasn't worried about this because the "default" setup was fast enough for our needs. In the current direct-connect setup, the iSCSI network is supporting virtual machines. So even without much tinkering, the speed is good enough. However, everyone's requirements are different.

...adam

____________________________________________
Adam Wead
Systems and Digital Collections Librarian
Rock and Roll Hall of Fame and Museum
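[For reference, the VLAN and jumbo-frame suggestions above would look something like this on a Linux initiator. This is only a sketch; the interface name eth1, VLAN ID 100, and the address are placeholders, and the switch ports would need matching VLAN and MTU settings.]

```shell
# Raise the MTU on the dedicated storage NIC
# (the switch must also be configured to accept jumbo frames).
ip link set dev eth1 mtu 9000

# Put iSCSI traffic on its own tagged VLAN (ID 100 is an example).
ip link add link eth1 name eth1.100 type vlan id 100
ip link set dev eth1.100 mtu 9000 up
ip addr add 192.168.100.11/24 dev eth1.100
```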
On 12/09/2011 11:27 AM, Alan McKay wrote:
> Hey folks,
>
> I had some general questions, and when reading through the list archives I
> came across an iSCSI discussion back in February where a couple of
> individuals were going back and forth about drafting up a "best practices"
> doc and putting it into a wiki. Did that ever happen? And if so, where
> is it?
>
> Now my questions :
> We are not using iSCSI yet at work but I see a few places where it would be
> useful e.g. a number of heavy-use NFS mounts (from my ZFS appliance) that I
> believe would be slightly more efficient if I converted them to iSCSI. I
> also want to introduce some virtual machines which I think would work out
> best if I created iSCSI drives for them back on my Oracle/Sun ZFS appliance.

As you are aware, NFS vs. iSCSI is an apples/oranges comparison. As two or more machines will see the same "block device" using iSCSI, it falls on higher layers to ensure that the storage is accessed safely (i.e. using clustered LVM, GFS2, etc.). Alternatively, you need to ensure that no two nodes access the same partitions at the same time, which precludes live migration of VMs.

> I mentioned iSCSI to the guy whose work I have taken over here so that he
> can concentrate on his real job, and when I mentioned that we should have a
> separate switch so that all iSCSI traffic is on its own switch, he balked
> and said something like "it is a switched network, it should not matter".
> But that does not sit right with me - the little bit I've read about iSCSI
> in the past always stresses that you should have it on its own network.

"Switched network" simply means that data going from machine A to machine B isn't sent to machine C. It doesn't speak to capacity issues.

> So 2 questions :
> - how important is it to have it on its own network?

From what standpoint? I always have a dedicated network, primarily to ensure that if/when the network is saturated, other traffic isn't interrupted. This is particularly important when you have latency-sensitive applications like clustering. There are also arguments for security, but this is only half-true. A VLAN would isolate the network, and encrypting the traffic (with its performance trade-offs) would protect it from sniffing.

> - is it OK to use an unmanaged switch (as long as it is Gigabit), or are
> there some features of a managed switch that are desirable/required with
> iSCSI?

My concern wouldn't be with managed/unmanaged so much as slow/fast. Cheap switches generally have low(er) internal switching bandwidth, so be sure to look that up in the switch's specs and compare it to your expected loads. There are other differences, too, like MAC table sizes and whatnot.

> thanks,
> -Alan

This may or may not be of use to you, but here is a link to an (in-progress, incomplete) tutorial I am working on. The block diagram just below this link shows how I configure my (VM cluster) networks. It uses a dedicated "SN" (Storage Network) for DRBD replication traffic, so it's not for iSCSI, but the concept is similar.

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Network

The network in this configuration is completely redundant (bonding mode=1 (Active/Passive) across two switches). In my case, there are just two managed switches, but there is no real reason that you can't use a pair of unmanaged switches for each subnet (paired for redundancy), so long as those switches are sufficient for your expected load/growth.

Cheers

--
Digimer
E-Mail: digimer at alteeve.com
Freenode handle: digimer
Papers and Projects: http://alteeve.com
Node Assassin: http://nodeassassin.org
"omg my singularity battery is dead again. stupid hawking radiation." - epitron
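[To make the "two or more machines see the same block device" point above concrete, here is a rough open-iscsi sketch of attaching such a device from an initiator. The target address and IQN are hypothetical placeholders.]

```shell
# Discover targets exported by the storage appliance (address is a placeholder).
iscsiadm -m discovery -t sendtargets -p 192.168.100.10

# Log in to one of the discovered targets (the IQN below is hypothetical).
iscsiadm -m node -T iqn.2011-12.com.example:vmstore -p 192.168.100.10 --login

# The LUN now appears as an ordinary local disk (e.g. /dev/sdb). If a second
# node logs in to the same target, both see the same raw device -- hence the
# need for clustered LVM/GFS2, or strict one-node-per-partition discipline.
```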
On Fri, Dec 9, 2011 at 10:27 AM, Alan McKay <alan.mckay at gmail.com> wrote:
>
> Now my questions :
> We are not using iSCSI yet at work but I see a few places where it would be
> useful e.g. a number of heavy-use NFS mounts (from my ZFS appliance) that I
> believe would be slightly more efficient if I converted them to iSCSI. I
> also want to introduce some virtual machines which I think would work out
> best if I created iSCSI drives for them back on my Oracle/Sun ZFS appliance.

This doesn't directly apply, but this NFS appliance vendor wants you to think that NFS isn't as bad as you might think:
http://www.bluearc.com/bluearc-resources/downloads/analyst-reports/BlueArc-AR-NFSmyths.pdf

Overcommitting for de-dup/compression might be harder with iSCSI - and resizing filesystems would be a lot harder.

> I mentioned iSCSI to the guy whose work I have taken over here so that he
> can concentrate on his real job, and when I mentioned that we should have a
> separate switch so that all iSCSI traffic is on its own switch, he balked
> and said something like "it is a switched network, it should not matter".

Is it a single switch? Otherwise you share the bandwidth on the trunk connections.

> But that does not sit right with me - the little bit I've read about iSCSI
> in the past always stresses that you should have it on its own network.
>
> So 2 questions :
> - how important is it to have it on its own network?
> - is it OK to use an unmanaged switch (as long as it is Gigabit), or are
> there some features of a managed switch that are desirable/required with
> iSCSI?

I've seen recommendations to use jumbo frames for iSCSI - and if you do that, everything on that subnet needs to be configured for them.

--
Les Mikesell
lesmikesell at gmail.com
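[The point that everything on the subnet must agree on jumbo frames can be checked end to end with a non-fragmenting ping; a sketch, with a placeholder target address:]

```shell
# 8972 bytes of ICMP payload + 8-byte ICMP header + 20-byte IP header = 9000.
# -M do forbids fragmentation, so this succeeds only if every device in the
# path (both NICs and all switches in between) accepts 9000-byte frames.
ping -c 3 -M do -s 8972 192.168.100.10
```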
The big issue in corporate land would be security. Yes, you can do VLANs and/or encrypt it, but that is going to add overhead, either management (people) or CPU, both of which take away from any speed advantages you might get.

On Fri, 9 Dec 2011, Alan McKay wrote:
> Hey folks,
>
> I had some general questions, and when reading through the list archives I
> came across an iSCSI discussion back in February where a couple of
> individuals were going back and forth about drafting up a "best practices"
> doc and putting it into a wiki. Did that ever happen? And if so, where
> is it?
>
> Now my questions :
> We are not using iSCSI yet at work but I see a few places where it would be
> useful e.g. a number of heavy-use NFS mounts (from my ZFS appliance) that I
> believe would be slightly more efficient if I converted them to iSCSI. I
> also want to introduce some virtual machines which I think would work out
> best if I created iSCSI drives for them back on my Oracle/Sun ZFS appliance.
>
> I mentioned iSCSI to the guy whose work I have taken over here so that he
> can concentrate on his real job, and when I mentioned that we should have a
> separate switch so that all iSCSI traffic is on its own switch, he balked
> and said something like "it is a switched network, it should not matter".
> But that does not sit right with me - the little bit I've read about iSCSI
> in the past always stresses that you should have it on its own network.
>
> So 2 questions :
> - how important is it to have it on its own network?
> - is it OK to use an unmanaged switch (as long as it is Gigabit), or are
> there some features of a managed switch that are desirable/required with
> iSCSI?
>
> thanks,
> -Alan

----------------------------------------------------------------------
Jim Wildman, CISSP, RHCE
jim at rossberry.com
http://www.rossberry.net
"Society in every state is a blessing, but Government, even in its best state, is a necessary evil; in its worst state, an intolerable one."
   - Thomas Paine
On Dec 9, 2011, at 11:27 AM, Alan McKay <alan.mckay at gmail.com> wrote:

> So 2 questions :
> - how important is it to have it on its own network?

The traffic should definitely be segregated for security reasons and to make sure there is minimal crosstalk. Whether to put it on a separate switch or VLAN depends on your current network capacity.

> - is it OK to use an unmanaged switch (as long as it is Gigabit), or are
> there some features of a managed switch that are desirable/required with
> iSCSI?

An unmanaged switch should be used only if you plan to restrict it to iSCSI traffic, and then you should get a high-quality one. Managed switches come in very high-quality versions that give you flexibility, security, and performance. The Dell 6224 or 6248 switches are priced low and provide excellent features for iSCSI. Plus, they can have up to 4 10Gbps ports that can be used for uplinks OR 10Gbps endpoints.

-Ross
----- Original Message -----
| Hey folks,
|
| I had some general questions, and when reading through the list
| archives I came across an iSCSI discussion back in February where a
| couple of individuals were going back and forth about drafting up a
| "best practices" doc and putting it into a wiki. Did that ever happen?
| And if so, where is it?
|
| Now my questions :
| We are not using iSCSI yet at work but I see a few places where it
| would be useful e.g. a number of heavy-use NFS mounts (from my ZFS
| appliance) that I believe would be slightly more efficient if I
| converted them to iSCSI. I also want to introduce some virtual
| machines which I think would work out best if I created iSCSI drives
| for them back on my Oracle/Sun ZFS appliance.
|
| I mentioned iSCSI to the guy whose work I have taken over here so that
| he can concentrate on his real job, and when I mentioned that we
| should have a separate switch so that all iSCSI traffic is on its own
| switch, he balked and said something like "it is a switched network,
| it should not matter". But that does not sit right with me - the
| little bit I've read about iSCSI in the past always stresses that you
| should have it on its own network.
|
| So 2 questions :
| - how important is it to have it on its own network?
| - is it OK to use an unmanaged switch (as long as it is Gigabit), or
| are there some features of a managed switch that are
| desirable/required with iSCSI?
|
| thanks,
| -Alan

It is not imperative to have a separate switch, but it certainly helps, especially if your switch is not managed. Managed switches will allow you to create VLANs that can be both jumbo-frame and non-jumbo-frame on the same device. Jumbo frames are really the important thing when it comes to iSCSI. Having 9000-byte packets versus 1500-byte packets will dramatically increase your performance per interrupt. Most cheaper unmanaged switches cannot do this.

Second, if they're cheap switches, you'll likely saturate the backplane no matter what. Get good switches for this type of work.

--
James A. Peltier
Manager, IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone : 778-782-6573
Fax : 778-782-3045
E-Mail : jpeltier at sfu.ca
Website : http://www.sfu.ca/itservices
          http://blogs.sfu.ca/people/jpeltier
I will do the best I can with the talent I have
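[The "performance per interrupt" point can be put in rough numbers: larger frames mean fewer frames (and so, at most one interrupt per frame, fewer interrupts) to move the same data. A sketch, assuming the standard 40 bytes of IP+TCP headers per frame:]

```shell
# Frames needed to move 1 MiB at MTU 1500 vs. MTU 9000
# (ceiling division; 40 bytes of IP+TCP headers assumed per frame).
data=$((1 << 20))                         # 1 MiB of payload to transfer
std=$(( (data + 1459) / 1460 ))           # 1500 - 40 = 1460-byte payloads
jumbo=$(( (data + 8959) / 8960 ))         # 9000 - 40 = 8960-byte payloads
echo "mtu1500=$std mtu9000=$jumbo"        # roughly a 6x reduction in frames
```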
On Fri, Dec 9, 2011 at 11:27 AM, Alan McKay <alan.mckay at gmail.com> wrote:

> So 2 questions :
> - how important is it to have it on its own network?
> - is it OK to use an unmanaged switch (as long as it is Gigabit), or are
> there some features of a managed switch that are desirable/required with
> iSCSI?

At the very least, keep the iSCSI traffic on its own VLAN. You don't want unnecessary broadcast traffic on your storage network.

I would also stay away from unmanaged switches. You want a higher-end switch that supports cut-through mode; store-and-forward will cause higher latency.

Ryan
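[A rough sense of the store-and-forward cost mentioned above: such a switch must buffer the whole frame before forwarding it, so each hop adds at least the frame's serialization time. A simplified back-of-the-envelope sketch at gigabit speed:]

```shell
# At 1 Gbps, one bit takes 1 ns to serialize, so the per-hop
# store-and-forward delay in nanoseconds equals the frame size in bits.
frame_bytes=9018                 # 9000-byte jumbo MTU + 18 bytes Ethernet framing
delay_ns=$(( frame_bytes * 8 ))
echo "$delay_ns"                 # 72144 ns, i.e. ~72 us added per hop
```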