Hi,

We currently use an iSCSI SAN and VMware ESX, but it provides a lot of expensive features we don't necessarily need. It's also pretty complicated, with levels of virtualization in the storage that, again, we don't really need or use.

I'm looking to design a simpler, expandable and hopefully less expensive storage solution for use with our virtualization platform.

I've been testing with XCP and I like what I see so far, but I have no idea where to even start with storage.

We'd want to run 2 XCP hosts, running perhaps 30-50 Windows and Linux guests. The storage could be directly attached to the hosts, or on a SAN or a NAS. We'd want a decent level of redundancy in the storage and the ability to run the VMs on either host.

Can I get some recommendations on what people are using for this size environment, and where I should start looking to learn about my options?

Thanks,

Brett Westover

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On Fri, Oct 21, 2011 at 9:01 AM, Brett Westover <bwestover@pletter.com> wrote:

> We'd want to run 2 XCP hosts, running perhaps 30-50 Windows and Linux
> guests. The storage could be directly attached to the hosts, or on a SAN
> or a NAS. We'd want a decent level of redundancy in the storage and the
> ability to run the VMs on either host.

XCP by default has support for an iSCSI initiator but not an iSCSI target. I just wrote an article on how to add an iSCSI target to XCP 1.1:

http://grantmcwilliams.com/tech/virtualization/xcp-howtos/553-creating-an-iscsi-target-on-xen-cloud-platform-11

Maybe not exactly what you're wanting, but if it answers a question then great. All of my hosts in my cloud are XCP machines, so even though I have one that's largely a router, firewall, and host for support VMs (software repositories, DHCP, DNS, etc.), it also acts as an iSCSI target for the other hosts.

Grant McWilliams
http://grantmcwilliams.com/

Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.
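[For reference, the core of a software iSCSI target on a stock CentOS box (the approach the linked how-to adapts for XCP's dom0) can be sketched with scsi-target-utils. This is a sketch only; the IQN and the backing device /dev/sdb are hypothetical placeholders, not values from this thread:]

```shell
# Install and start the userspace iSCSI target daemon (CentOS 5/6)
yum install -y scsi-target-utils
service tgtd start

# Create a target under a hypothetical IQN, back it with a spare
# block device as LUN 1, and allow any initiator to log in
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2011-10.local.example:xcp-sr1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /dev/sdb
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

# Verify the target and LUN are exported
tgtadm --lld iscsi --op show --mode target
```

[Note that tgtadm changes made this way do not survive a reboot; a production setup would carry the same definitions in /etc/tgt/targets.conf.]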
> XCP by default has support for an iSCSI initiator but not an iSCSI target.
> I just wrote an article on how to add an iSCSI target to XCP 1.1:
> http://grantmcwilliams.com/tech/virtualization/xcp-howtos/553-creating-an-iscsi-target-on-xen-cloud-platform-11

Very nicely written up how-to, thank you for that. I don't think we'd necessarily want to use XCP as our iSCSI target, though, because we wouldn't really have resources left for multi-purposing the storage machine.
It's nice to know how to do it if we have a use case for that, though.

So, using an iSCSI target for shared storage between our 2-3 XCP hosts would be one option. What are the benefits over using a NAS running NFS (which are usually cheaper)?

Also, it seems I am reading that you can enable live migration on an XCP pool without an external storage device at all, using local storage in each host. Is that right?
On Mon, Oct 24, 2011 at 2:50 PM, Brett Westover <bwestover@pletter.com> wrote:

> So, using an iSCSI target for shared storage between our 2-3 XCP hosts
> would be one option. What are the benefits over using a NAS running NFS
> (which are usually cheaper)?

The write speeds of NFS stink. On my hosts using an iSCSI software target SR I get about 50% more disk throughput on writes than with an NFS SR. Reads are very similar, though.

Even if you want to use a CentOS machine to create an iSCSI SR you can follow my tutorial; just skip the part about adding the CentOS repos. There are advantages to an iSCSI hardware target, such as speed, but they're not a shared SR type. The shared SR types are NFS and software iSCSI targets. Between the two I'd recommend the latter. Setting up the initiator on the XCP host is equally easy for both types, and with my tutorial it's fairly easy to set up an iSCSI target on another machine.

You will want to analyze your disk and network layout for the machine and hosts, though. All of the standard hardware performance data applies to an iSCSI target, i.e. number of disks, RAID levels, etc. It would be ideal for your XCP hosts to have dual network cards so that networking data is sent across different cables/switches than the SAN data.
In the past I did performance articles on the speed of Xen file-based images vs. LVM, and I'll be working on some of the same for XCP regarding the various local storage types, iSCSI and NFS. It won't be done for at least a month, though.

Grant McWilliams
http://grantmcwilliams.com/
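[For concreteness, here is roughly how the two shared SR types compared above are attached from the XCP pool master with the xe CLI. This is a sketch, not from the thread: the server names, target address, IQN and SCSIid are hypothetical placeholders you would substitute from your own storage:]

```shell
# Shared NFS SR: XCP keeps one sparse VHD file per VDI on the export
xe sr-create name-label="nfs-sr" type=nfs shared=true content-type=user \
   device-config:server=filer.example.local \
   device-config:serverpath=/export/xcp-sr

# Shared software-iSCSI SR: XCP layers LVM over the attached LUN.
# Probing without an IQN (then without a SCSIid) makes xe report the
# available values for the next step:
xe sr-probe type=lvmoiscsi device-config:target=10.0.0.10
xe sr-probe type=lvmoiscsi device-config:target=10.0.0.10 \
   device-config:targetIQN=iqn.2011-10.local.example:xcp-sr1

# Then create the SR with the values the probes returned
xe sr-create name-label="iscsi-sr" type=lvmoiscsi shared=true \
   device-config:target=10.0.0.10 \
   device-config:targetIQN=iqn.2011-10.local.example:xcp-sr1 \
   device-config:SCSIid=<scsiid-from-probe>
```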
> You will want to analyze your disk and network layout for the machine and
> hosts though. All of the standard hardware performance data applies to an
> iSCSI target, i.e. number of disks, RAID levels etc. It would be ideal for
> your XCP hosts to have dual network cards so that networking data is sent
> across different cables/switches than the SAN data.

I agree; no matter how you access the storage, the load generated by the applications needs to be supported by the speed and number of disks, and the RAID level. We've gone through the pain of having issues with storage that seemed like they could be the iSCSI initiator, the SAN network or buffers on the iSCSI storage device... when it turned out we were simply overloading the I/O capabilities of the disks.

Also, it seems I am reading that you can enable live migration on an XCP pool without an external storage device at all, using local storage in each host. Is that right? How does that work?

My understanding of shared storage is that one copy of the VM exists in a place accessible to the two hosts, so to do live migration the VM storage doesn't actually move.

Can someone point me to some documentation that explains how XenServer and XCP handle storage? It seems to be quite different than VMware.

Thanks,

Brett Westover
On Wed, Oct 26, 2011 at 10:30 AM, Brett Westover <bwestover@pletter.com> wrote:

> Also, it seems I am reading that you can enable live migration on an XCP
> pool without an external storage device at all, using local storage in
> each host. Is that right? How does that work?
>
> Can someone point me to some documentation that explains how XenServer
> and XCP handle storage? It seems to be quite different than VMware.

I don't think it's that different. If you want to do migration your storage needs to be remote. Currently XCP seems to support two types of remote storage that can support migration - NFS and iSCSI software target.

Grant McWilliams
> I don't think it's that different. If you want to do migration your
> storage needs to be remote. Currently XCP seems to support two types of
> remote storage that can support migration - NFS and iSCSI software target.

Well, then that clears that up... unless someone comes along and disagrees :)

I thought I heard somewhere you could use DRBD storage on two servers and use that for your VMs. Since the volumes exist on both servers, you could do migration. But I think that was with a customized server running Xen (not XCP).

So, since you previously said that with XenServer you've measured significantly better performance (especially on writes) with iSCSI over NFS, it seems what I'm looking for is an iSCSI SAN (which is what I have now). I'm using DataCore SanMelody, and we like it just fine, but adding a second server and mirroring is VERY expensive (5 figures just for the licensing).

Since we're looking at converting as much of this to open source products as possible, are we looking for OpenFiler? FreeNAS? I think you mentioned building a CentOS server w/ storage and then presenting that with built-in iSCSI support.

Looking for advantages/disadvantages... We are not against buying an enterprise product like NexentaStor if necessary to get the features we need; we just can't justify anything like what we'd pay now for those (things like active/active mirroring as a primary example) from DataCore.
Thanks,

Brett
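[For reference, the dual-primary mode that a DRBD-backed setup like the one mentioned above relies on is enabled in the DRBD resource definition. This is a sketch in DRBD 8.3 syntax with hypothetical hostnames, disks and addresses, not a tested configuration from this thread:]

```
# /etc/drbd.d/r0.res - hypothetical dual-primary resource
resource r0 {
  protocol C;              # synchronous replication; required for
                           # safe dual-primary operation
  net {
    allow-two-primaries;   # both nodes may hold the device
                           # primary at the same time
  }
  startup {
    become-primary-on both;
  }
  on xcp-host1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on xcp-host2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```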
My opinion on FreeNAS - using ZFS was the only reason I _started_ to use it, but it wouldn't work with iSCSI. That may have changed in the latest release, but in the first release of the 8.x version it was broken.

On Wed, Oct 26, 2011 at 3:13 PM, Brett Westover <bwestover@pletter.com> wrote:

> Since we're looking at converting as much of this to open source products
> as possible, are we looking for OpenFiler? FreeNAS?
Hi Grant,

> I don't think it's that different. If you want to do migration your
> storage needs to be remote. Currently XCP seems to support two types of
> remote storage that can support migration - NFS and iSCSI software target.

I'd rather say that XCP needs "shared" storage rather than "remote" storage.

When creating an iSCSI SR, xapi automagically does all the stuff behind the scenes, setting up the underlying PBD and an SR of type "shared". However, you can also manually create the PBD on whatever /dev/ device you like (which ought to be shared, be it iSCSI/AoE/DRBD...) and set the SR to be "shared".
I have tried this setup for XCP with a DRBD primary-primary SR, and it works fine with live migration. From the point of view of xapi, the SR is kind of local (/dev/drbd0), but it can still be shared. This setup is not yet in production (not stress-tested yet), but up to now it works flawlessly.

Cheers,

Denis

--
Denis Cardon
Tranquil IT Systems
44 bvd des pas enchantés
44230 Saint Sébastien sur Loire
tel : +33 (0) 2.40.97.57.57
http://www.tranquil-it-systems.fr
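[The manual route described above can be sketched with the xe CLI roughly as follows. This is a sketch under stated assumptions, not a tested recipe: the host UUIDs are placeholders, and /dev/drbd0 is assumed to already exist and be primary on both hosts:]

```shell
# On the pool master: create an LVM SR on the DRBD device and mark it
# shared, so xapi expects every host in the pool to have a PBD for it
SR=$(xe sr-create name-label="drbd-sr" type=lvm content-type=user \
     host-uuid=<master-host-uuid> \
     device-config:device=/dev/drbd0 shared=true)

# sr-create only makes a PBD for the host it was given, so create and
# plug a PBD for the second host pointing at the same device
PBD=$(xe pbd-create sr-uuid=$SR host-uuid=<second-host-uuid> \
      device-config:device=/dev/drbd0)
xe pbd-plug uuid=$PBD
```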
On Thu, Oct 27, 2011 at 12:47 AM, Denis Cardon <denis.cardon@tranquil-it-systems.fr> wrote:

> I'd rather say that XCP needs "shared" storage rather than "remote"
> storage.
> However, you can also manually create the PBD on whatever /dev/ device
> (which ought to be shared, be it iSCSI/AoE/DRBD...) and set the SR to be
> "shared".
>
> I have tried this setup for XCP with a DRBD primary-primary SR, and it
> works fine with live migration.

Agreed. There are a lot of things that need to be done from the xe CLI. "Remote" itself isn't quite so important, although I'd stress that one needs to think about layout first, of course.

Grant McWilliams
http://grantmcwilliams.com/
> I'd rather say that XCP needs "shared" storage rather than "remote"
> storage.

I agree, it just has to be shared, and that makes sense with what I've read. However, since we're just starting out and "remote shared" seems to be more common and well supported, it seems simplest for us to go that route.

> I have tried this setup for XCP with a DRBD primary-primary SR, and it
> works fine with live migration. From the point of view of xapi, the SR is
> kind of local (/dev/drbd0), but it can still be shared. This setup is not
> yet in production (not stress-tested yet), but up to now it works
> flawlessly.

This is really good to know about, though, in case a design might require it in some form. Long term, it seems like breaking the dependency on outside storage would be a good thing for virtualization, to further consolidate hardware.

> Agreed. There are a lot of things that need to be done from the xe CLI.
> "Remote" itself isn't quite so important, although I'd stress that one
> needs to think about layout first, of course.

And once again, I totally agree with this point. No matter how or where storage is presented, it doesn't change the rules about how many disks, and in what RAID level, it takes to support the level of I/O that your applications will generate.

What are some good options for configuring a redundant SAN using open standards and commodity hardware? If we're going to convert from proprietary virtualization software on expensive hardware to open source virtualization on commodity hardware (and I think we are), it would make sense to do the same for the storage level. It doesn't eliminate our redundancy requirements, though.

Brett Westover