Hi Everyone,

I am going to get a storage server which will be connected to my Xen hosts via iSCSI over Ethernet. I wish to use LVM for the DomU disks. The storage server will have a RAID10 array, and 2 Xen hosts will connect to it (each will have a 50% share of the RAID10 array, space-wise).

What is the best way to go about this? Should I:

a) Split the RAID10 array into 2 partitions on the storage server, export 1 partition to each Xen host, and let the Xen host manage LVM?

or

b) Do all the LVM work on the storage server and export each LVM logical volume to the correct Xen host via iSCSI? Since each host could have around 100 VMs on it, that's a lot of iSCSI exporting!

Any help is appreciated.

Thanks
Jonathan
Or should I export the whole RAID10 array via a single iSCSI LUN and have the two servers connect to the same iSCSI target (is this possible?)? This would allow migration, I guess.

Any help on best practices is appreciated.

Thanks
Jonathan Tripathy wrote:
> Or should I export the whole RAID10 array via a single iSCSI LUN and
> have the two servers connect to the same iSCSI target (is this
> possible?)? This would allow migration, I guess.

If you can run cluster services on your dom0 hosts (easy with CentOS/RHEL, don't know about others), you can also use CLVM. That allows both hosts to see all logical volumes. You can perform volume operations (create/extend/remove/etc.) from either node, and use Xen migration.
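As a rough sketch (assuming the shared iSCSI LUN shows up as /dev/sdb on both dom0s, clvmd is running on both, and the device, VG and LV names below are only examples), the CLVM approach might look like:

    pvcreate /dev/sdb                       # the shared iSCSI disk
    vgcreate --clustered y vg_xen /dev/sdb  # clustered VG, managed by clvmd
    lvcreate -L 10G -n vm01-disk vg_xen     # new LV becomes visible on both dom0s

    # a domU config can then use the LV directly, e.g.:
    # disk = ['phy:/dev/vg_xen/vm01-disk,xvda,w']

Because both hosts see the same volumes, a domU backed this way can be live-migrated between them.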
Jeff Sturm wrote:
> If you can run cluster services on your dom0 hosts (easy with
> CentOS/RHEL, don't know about others), you can also use CLVM.

I'm using Ubuntu, so I'm not sure if CLVM is the best option. Does anyone have any experience with "shared storage" using iSCSI with Debian/Ubuntu dom0s?

Thanks
On Monday 14 June 2010 at 14:29 +0100, Jonathan Tripathy wrote:
> a) Split the RAID10 array into 2 partitions on the storage server, and
> export 1 partition to each Xen host, then let the Xen host manage LVM?
>
> or
>
> b) Do all LVM stuff on the storage server and export each LVM logical
> volume to the correct Xen host via iSCSI? Since each host could have
> around 100 VMs on it, that's a lot of iSCSI exporting!

Hello,

If you export two "big LUNs" and do the LVM work on each Xen server, you will not be able to "switch" a VM from one server to the other (with xm save/restore, for example). With that setup a VM can only be started on a specific server. You will also have two LVM sets to manage.

If you do all the storage management on your SAN, migrating a VM from one server to the other will be easy, and can even be done "live". The storage management will also be "unified". The only problem could be the maximum number of iSCSI LUNs on your SAN, but it is often greater than 127 distinct units.

Regards

JPP
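For the "all LVM on the SAN" option, a rough sketch of what the exports could look like with iSCSI Enterprise Target on the storage server and open-iscsi on the Xen hosts (the IQNs, LV names and the 192.168.1.10 portal address are only examples):

    # /etc/ietd.conf on the storage server -- one target/LUN per domU volume
    Target iqn.2010-06.com.example:vm001
        Lun 0 Path=/dev/vg_san/vm001,Type=blockio
    Target iqn.2010-06.com.example:vm002
        Lun 0 Path=/dev/vg_san/vm002,Type=blockio

    # On a Xen host: discover the targets and log in to the ones it needs
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2010-06.com.example:vm001 -p 192.168.1.10 --login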
JP P wrote:
> If you do all the storage management on your SAN, migrating a VM from
> one server to the other will be easy, and can even be done "live". The
> only problem could be the maximum number of iSCSI LUNs on your SAN, but
> it is often greater than 127 distinct units.

OK, thanks for the advice. I think I shall go down the route of doing all the LVM work on the SAN.

However, I think that I will need more than 127 virtual machines, so having an iSCSI target per VM is probably not an option. Is there any other solution for Ubuntu/Debian?

Thanks
Hi,

> However, I think that I will need more than 127 virtual machines, so
> having an iSCSI target per VM is probably not an option. Is there any
> other solution for Ubuntu/Debian?

Would it be an option to have one iSCSI target, used by multiple clients, where only one client accesses a directory at a time? That way it would be a lot easier to migrate a VM from one host to another.

HTH

Regards,

Serge Fonville
Serge Fonville wrote:
> Would it be an option to have one iSCSI target, used by multiple
> clients, where only one client accesses a directory at a time? That way
> it would be a lot easier to migrate a VM from one host to another.

So you're saying that I could just have one big LUN which both Xen hosts connect to, and as long as each host didn't try to run the same DomU at the same time, I'd be OK? Is this safe and done in industry?
>> Would it be an option to have one iSCSI target, used by multiple
>> clients, where only one client accesses a directory at a time?
>
> So you're saying that I could just have one big LUN which both Xen hosts
> connect to, and as long as each host didn't try to run the same DomU at
> the same time, I'd be OK? Is this safe and done in industry?

I have seen it deployed in production, but you should definitely test and google a lot, since it is a very complex and sensitive setup, especially if not done properly. There is a lot more to look into than with a normal iSCSI setup. I googled for "iscsi single target multiple clients" and found the results very interesting.

Also, why exactly have you chosen to use iSCSI?

Regards,

Serge Fonville
> Also, why exactly have you chosen to use iSCSI?

There is nothing else that I can afford. What other options are there?
> There is nothing else that I can afford. What other options are there?

Well, it would be a lot easier (and just as cheap) to use NFS. The only downside to NFS might be that it is not accessible from Windows, but it is much simpler to set up, easier to maintain, faster to troubleshoot, friendlier to test and a lot more widely used.

So basically, what benefits does iSCSI offer you that NFS doesn't?

HTH,

Regards,

Serge Fonville
Serge Fonville wrote:
> So basically, what benefits does iSCSI offer you that NFS doesn't?

Can you use LVM with NFS? Is NFS not slower than iSCSI?
> Can you use LVM with NFS? Is NFS not slower than iSCSI?

LVM over NFS is not possible; LVM needs to be applied to a block device. Fortunately, you can still use LVM on the storage server.

NFS is often considered slower because it adds an additional layer to the communication. That does not necessarily hurt performance badly enough to be a deal-breaker. If you expect to constantly utilise over 70% of your bandwidth, you may be better off using iSCSI; then again, if you are utilising that much, you should probably rethink your setup.

Since I currently know very little about your expected load, I cannot give you a definitive answer, but using NFS for your VMs should at least be looked into thoroughly.

HTH

Regards,

Serge Fonville
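If you did go the NFS route, the domU disks would normally be image files on the NFS mount rather than block devices. A rough sketch (the paths, address and file names are only examples):

    # On the dom0: mount the export from the storage server
    mount -t nfs 192.168.1.10:/export/xen /var/lib/xen/images

    # In the domU config, point the disk at an image file on that mount
    disk = ['tap:aio:/var/lib/xen/images/vm001.img,xvda,w']

The LVM carving would then happen only on the storage server, underneath the exported filesystem.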
On Mon, Jun 14, 2010 at 11:41 AM, Serge Fonville <serge.fonville@gmail.com> wrote:
> So basically, what benefits does iSCSI offer you that NFS doesn't?

AFAIK, only _very_ high-end NAS boxes offer NFS performance comparable to iSCSI. In DIY-land, it's unusable for anything more than testing.

--
Javier
On Monday 14 June 2010 17:59:26 Jonathan Tripathy wrote:
> So you're saying that I could just have one big LUN which both Xen hosts
> connect to, and as long as each host didn't try to run the same DomU at
> the same time, I'd be OK? Is this safe and done in industry?

A lot depends on whether you plan to use image files or block devices. Whether snapshotting comes into the equation is also very relevant, as is whether you use cluster software with STONITH functionality.

B.
On Monday 14 June 2010 18:56:58 Serge Fonville wrote:
> LVM over NFS is not possible; LVM needs to be applied to a block device.
> Fortunately, you can still use LVM on the storage server.

I suppose NFS requires image-based access, which I understand is less performant.
> I suppose NFS requires image-based access, which I understand is less
> performant.

You may also find
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.1830&rep=rep1&type=pdf
interesting.

HTH

Regards,

Serge Fonville
On 14/06/10 20:03, Serge Fonville wrote:
> You may also find
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.1830&rep=rep1&type=pdf
> interesting.

That is an interesting read, which says that NFS and iSCSI are nearly the same for reads.

What is generally used in industry? At max capacity, my setup will hold up to 672 DomUs spread over 6 Xen hosts (and 3 RAID10 arrays on a single storage server), so clearly management is a big concern. This is where I feel that LVM/iSCSI-based access is easier?
On Monday 14 June 2010 22:57:00 Jonathan Tripathy wrote:
> At max capacity, my setup will hold up to 672 DomUs spread over 6 Xen
> hosts (and 3 RAID10 arrays on a single storage server), so clearly
> management is a big concern.

That sounds like an awful lot of DomUs per RAID. Have you tested this? Can the RAID I/O deal with this?
On 14/06/10 22:17, Bart Coninckx wrote:
> That sounds like an awful lot of DomUs per RAID. Have you tested this?
> Can the RAID I/O deal with this?

Nope, I haven't tested this yet; however, this is based on a "risk model" and will probably never reach that high. I'm basing my VM packages on a "points" system: the highest package is worth 8 points, the middle package is worth 4 points, and the smallest package is worth 1 point. RAM sizes are 1024MB, 512MB and 128MB respectively. The smallest package will only have a drive size of 6GB, and the internet connection will be limited as well, so I'm basing my figures on the assumption that the smallest VMs probably won't be used for high disk I/O.
> What is generally used in industry? At max capacity, my setup will hold
> up to 672 DomUs spread over 6 Xen hosts (and 3 RAID10 arrays on a single
> storage server), so clearly management is a big concern. This is where I
> feel that LVM/iSCSI-based access is easier?

Hello,

I doubt that a single storage server would be able to "feed" so many VMs. Even if they are not I/O hungry, the network bandwidth would be very high, and you would need very high-speed network devices and routers/switches, plus a big SAN with many high-speed links.

Regards

JPP
----- Original message -----
> Nope, I haven't tested this yet; however, this is based on a "risk
> model" and will probably never reach that high. [...] I'm basing my
> figures on the assumption that the smallest VMs probably won't be used
> for high disk I/O.

It's your call, but I would definitely test-drive this first. People are going to put their websites on these and expect them to perform adequately, even for a few bucks.
On 14/06/10 23:15, jpp@jppozzi.dyndns.org wrote:
> I doubt that a single storage server would be able to "feed" so many
> VMs. Even if they are not I/O hungry, the network bandwidth would be
> very high, and you would need very high-speed network devices and
> routers/switches, plus a big SAN with many high-speed links.

Well, each VM on the low package was going to be limited to 1 Mbit/s max, and the total setup would be connected to a 100 Mbit/s connection. Of course, I could always ditch the idea of the low package and just offer the 2nd and 3rd packages (168 and 84 VMs respectively). Do you really think that there would be a big bottleneck with the low package (672 VMs), even with 3 RAID10 arrays covering them?
On 14/06/10 23:47, Jonathan Tripathy wrote:
> Do you really think that there would be a big bottleneck with the low
> package (672 VMs), even with 3 RAID10 arrays covering them?

I should also mention that the Ethernet path between the storage server and the hosts is as follows: a quad-port bonded gigabit NIC from the storage server to the switch, then a dual-port bonded gigabit NIC from the switch to each host.
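For reference, a dual-port bond like the host link described above might be declared roughly like this on a Debian/Ubuntu dom0 (the interface names, address and mode are assumptions, the exact option names vary between ifenslave versions, and the mode has to match what the switch supports):

    # /etc/network/interfaces (needs the ifenslave package)
    auto bond0
    iface bond0 inet static
        address 192.168.1.21
        netmask 255.255.255.0
        bond-slaves eth1 eth2
        bond-mode balance-rr
        bond-miimon 100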
On 14/06/10 16:18, Jonathan Tripathy wrote:
> I'm using Ubuntu, so I'm not sure if CLVM is the best option.
> Does anyone have any experience with "shared storage" using iSCSI with
> Debian/Ubuntu dom0s?

Why is that a problem with Debian/Ubuntu? Both have a package for clvm.

--
Angel L. Mateo Martínez
Sección de Telemática, Área de Tecnologías de la Información y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
> On 14/06/10 16:18, Jonathan Tripathy wrote:
>> I'm using Ubuntu, so I'm not sure if CLVM is the best option.
>> Does anyone have any experience with "shared storage" using iSCSI with
>> Debian/Ubuntu dom0s?
>
> Why is that a problem with Debian/Ubuntu? Both have a package for clvm.

If you use clvm, there are patches floating around to make it use openais instead of all the overhead you need for the default lock manager.

http://h2o.glou.fr/post/2009/04/20/clvm-openais-on-Debian/Lenny is what I used. I'm using drbd, not iSCSI, but that shouldn't matter.

James
----- Original message -----
> If you use clvm, there are patches floating around to make it use
> openais instead of all the overhead you need for the default lock
> manager.

You don't have snapshotting though.
>> If you use clvm, there are patches floating around to make it use
>> openais instead of all the overhead you need for the default lock
>> manager.
>
> You don't have snapshotting though.

Yes, and that is quite sad. In my case I am doing LVM on DRBD, giving me one big PV which I then slice up. I'm experimenting with DRBD on LVM, so on each node I create a volume and then run DRBD on that. I don't get easy resizing, and it's a bit more fiddly (e.g. you need to create the volumes on both nodes and then add a new DRBD resource), but I could create a snapshot on one node independently of the other, which would satisfy my requirements.

In the case of iSCSI you would just create an iSCSI device for each LV instead of running LVM on top of your iSCSI volume.

James
> In the case of iSCSI you would just create an iSCSI device for each LV
> instead of running LVM on top of your iSCSI volume.

Does that not mean that I would have to export nearly 600 LUNs?

Thanks
>> In the case of iSCSI you would just create an iSCSI device for each LV
>> instead of running LVM on top of your iSCSI volume.
>
> Does that not mean that I would have to export nearly 600 LUNs?

If you have 600 LVs then yes, and that may well be a better option. With 600 LVs all running on the same VG, clvm performance, if snapshotting was ever implemented, would suck terribly: every time an LV was written to and the snapshot received a copy of the original block, all other nodes would need to know about the metadata change or they would read bad data from the snapshot.

I don't know what the per-iSCSI-LUN overhead is vs the clvm overhead though... I guess it depends on how many nodes you have.

James
James Harper wrote:
> If you have 600 LVs then yes, and that may well be a better option.
> [...] I don't know what the per-iSCSI-LUN overhead is vs the clvm
> overhead though... I guess it depends on how many nodes you have.

Hi James,

I would have 6 nodes running 100 small (6GB HDD/128MB RAM) VMs each. Do you still think that exporting so many LUNs would be a good idea?

If I wanted to forgo "total migration", I could just export 1 big LUN and do LVM on the Xen host? And then if the server went down, it would just be a matter of connecting to that big LUN from another Xen server?
"Jonathan Tripathy" <jonnyt@abpni.co.uk> writes:> Does anyone have any experience with "shared storage" using iSCSI with > Debian/Ubuntu Dom0s?Yes. We use one iSCSI export for each domU, shared by two dom0s for failover. Each domU uses LVM if it wants, but neither does it for snapshotting alone: they are regular backup clients, no magic there. You''ll have to check if your SAN supports this high number of exports. Your plan is *very* ambitious anyway. -- Regards, Feri. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Ferenc Wagner wrote:
> Yes. We use one iSCSI export for each domU, shared by two dom0s for
> failover. [...] You'll have to check if your SAN supports this high
> number of exports. Your plan is *very* ambitious anyway.

My current train of thought is to just export one big LUN to each node and let the node handle LVM. While I couldn't use "live migration", I could always mount the big LUN on another server if the original were to fail.

Can you please explain to me how my plan is ambitious? Can someone please suggest where I should cut down / scale up?

Many Thanks
"Jonathan Tripathy" <jonnyt@abpni.co.uk> writes:> From: Ferenc Wagner [mailto:wferi@niif.hu] > >> "Jonathan Tripathy" <jonnyt@abpni.co.uk> writes: >> >>> Does anyone have any experience with "shared storage" using iSCSI with >>> Debian/Ubuntu Dom0s? >> >> Yes. We use one iSCSI export for each domU, shared by two dom0s for >> failover. Each domU uses LVM if it wants, but neither does it for >> snapshotting alone: they are regular backup clients, no magic there. >> You''ll have to check if your SAN supports this high number of exports. >> Your plan is *very* ambitious anyway. > > My current train of though is to just export one bigh LUN to each > node, and let the node handle LVM. While I coudn''t use "live > migration", I could always mount the big lun on another server if the > orignal were to fail.You can use live migration in such setup, even safely if you back it by clvm. You can even live without clvm if you deactivate your VG on all but a single dom0 before changing the LVM metadata in any way. A non-clustered VG being active on multiple dom0s isn''t a problem in itself and makes live migration possible, but you''d better understand what you''re doing.> Can you please explain to me how my plan is ambitious? Can someone > please suggest where I should cut down/ scale up?Even 100 domUs on a single dom0 is quite a lot. 100 Mbit/s upstream bandwidth isn''t much. You''ll have to tune your iSCSI carefully to achieve reasonable I/O speeds, which is limited by your total storage speed. Even if your domUs don''t do much I/O, 128 MB of memory is pretty much a minimum for each, 128 of those require 16 GB of dom0 memory (this is probably the easiest requirement to accomodate). -- Good luck, Feri. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Ferenc Wagner wrote:
> You can use live migration in such a setup, even safely if you back it
> by clvm. You can even live without clvm if you deactivate your VG on
> all but a single dom0 before changing the LVM metadata in any way.
> [...]
> You'll have to tune your iSCSI carefully to achieve reasonable I/O
> speeds, which are limited by your total storage speed.

Can you please explain the steps I would need to take in order to connect multiple clients to a single iSCSI target?

I was thinking of using LVM on the storage server to split my RAID array into 2 big LVs, and then export one LV to each node. The Xen node would then use LVM within this exported LV to split it up into small LVs for the DomUs. Is this a good or bad idea?

The 100 Mbit/s upstream is for the internet connection. The bandwidth to the iSCSI server is dual bonded gigabit Ethernet. What tuning could I do to the iSCSI setup?

Thanks
> You can use live migration in such a setup, even safely if you back it
> by clvm. You can even live without clvm if you deactivate your VG on
> all but a single dom0 before changing the LVM metadata in any way.

You can't snapshot though. I tried that once years ago and it made a horrible mess.

James
James Harper wrote:
> You can't snapshot though. I tried that once years ago and it made a
> horrible mess.

This is why I'm thinking of just using one big LUN for each Xen node, and manually connecting another Xen host to the LUN if the original server were to fail. The dom0s could handle the snapshotting...
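As a sketch of that manual failover (the IQN, portal address, VG name and config path are made up), once the failed node is confirmed to be completely down the surviving host would do something like:

    iscsiadm -m node -T iqn.2010-06.com.example:node1-lun -p 192.168.1.10 --login
    vgscan                          # pick up the VG that lives on that LUN
    vgchange -a y vg_node1
    xm create /etc/xen/vm001.cfg    # restart each domU that was on the failed node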
"James Harper" <james.harper@bendigoit.com.au> writes:>> You can use live migration in such setup, even safely if you back it >> by clvm. You can even live without clvm if you deactivate your VG on >> all but a single dom0 before changing the LVM metadata in any way. A >> non-clustered VG being active on multiple dom0s isn''t a problem in >> itself and makes live migration possible, but you''d better understand >> what you''re doing. > > You can''t snapshot though. I tried that once years ago and it made a > horrible mess.Even if done after deactivating the VG on all but a single node? That would be a bug. According to my understanding, it should work. I never tried, though, as snapshotting isn''t my preferred way of making backups. On the other hand I run domUs on snapshots of local LVs without any problem. And an LV being "local" is a concept beyond LVM in the above setting, so it can''t matter... -- Thanks, Feri. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
> "James Harper" <james.harper@bendigoit.com.au> writes: > > >> You can use live migration in such setup, even safely if you backit> >> by clvm. You can even live without clvm if you deactivate your VGon> >> all but a single dom0 before changing the LVM metadata in any way.A> >> non-clustered VG being active on multiple dom0s isn''t a problem in > >> itself and makes live migration possible, but you''d betterunderstand> >> what you''re doing. > > > > You can''t snapshot though. I tried that once years ago and it made a > > horrible mess. > > Even if done after deactivating the VG on all but a single node? That > would be a bug. According to my understanding, it should work. Inever> tried, though, as snapshotting isn''t my preferred way of makingbackups.> On the other hand I run domUs on snapshots of local LVs without any > problem. And an LV being "local" is a concept beyond LVM in the above > setting, so it can''t matter...A snapshot is copy-on-write. Every time the ''source'' is written to, a copy of the original block is saved to the snapshot (I may have that the wrong way around). This allows snapshots to be pretty much instant as very little data is manipulated. It also saves a lot of space. Doing that though involves a remapping of the snapshot every time the source is written to (eg block x isn''t in the ''source'' anymore, so storage is allocated to it etc) which involves a metadata update. So if the VG remained deactivated on all nodes for the life of the snapshot then it may work, and maybe this is what you meant in which case you are correct. If the activated the VG on the other nodes after creating the snapshot though, then problems may (will) arise! James _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
"Jonathan Tripathy" <jonnyt@abpni.co.uk> writes:>> You can use live migration in such setup, even safely if you back it >> by clvm. You can even live without clvm if you deactivate your VG on >> all but a single dom0 before changing the LVM metadata in any way. A >> non-clustered VG being active on multiple dom0s isn''t a problem in >> itself and makes live migration possible, but you''d better understand >> what you''re doing. >> >>> Can you please explain to me how my plan is ambitious? Can someone >>> please suggest where I should cut down/ scale up? >> >> Even 100 domUs on a single dom0 is quite a lot. 100 Mbit/s upstream >> bandwidth isn''t much. You''ll have to tune your iSCSI carefully to >> achieve reasonable I/O speeds, which is limited by your total storage >> speed. Even if your domUs don''t do much I/O, 128 MB of memory is >> pretty much a minimum for each, 128 of those require 16 GB of dom0 >> memory (this is probably the easiest requirement to accomodate). > > Can you please explain the steps I would need to take in order to > connect multipl clients to a single iSCSI target?No steps necessary, this is the usual mode of operation.> I was thinking of using LVM on the storage server to split my RAID > array in 2 big LVs, and then export one LV to a node. Then the xen > node would use LVM within this exported LV. to split it up into small > LVs for the DomUs. Is this a good or bad idea?You can do this if you aren''t interested in live migration. Otherwise, I find it pointless.> The 100 Mbit/s upstream is for the internet connection. The bandwidth > to the iSCSI server is dual bonded gigabit ethernet. What tuning could > i do to the iSCSI setup?MTU, number of outstanding requests, just to name a few. Google for it. And for all the rest as well. I think you aren''t well prepared for this task yet, be very careful. -- Regards, Feri. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
In spite of all the helpful advice here, I have decided to change my VPS
plans slightly.

Each node will run a max of 56 VMs, although I imagine the number would be
closer to 25. Each node will be connected to a storage server via a
dual-NIC bonded gigabit ethernet connection.

The storage server will have 3 RAID10 arrays. Each RAID10 array will serve
2 Xen nodes (so 6 Xen nodes will connect to one storage server). There will
also be 4 "hot spare" disks. This means there will be a max of 336 VMs per
storage server, although again I imagine the number would be closer to 150.
The storage server will connect to the network via 4 bonded gigabit NICs
(a quad bond). The RAID controller in the storage server will be an LSI
9260-4i 6G.

The storage server will export half of a RAID10 array (split via LVM) via
iSCSI to each Xen node. The Xen node will do its own LVM splitting for each
DomU. Live migration can't be done, however manually moving an entire LUN
to another server is easy.

Do the above plans sound better?

Thanks
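To make the two LVM layers in that plan concrete, a minimal sketch,
assuming LVM2 on both ends and the tgt (scsi-target-utils) framework on the
storage server; every device, VG and IQN name below is made up, and the
target commands should be adjusted to whichever iSCSI target you actually
run:

    # On the storage server: split one RAID10 array in half, one LV per node.
    pvcreate /dev/sdb                      # the RAID10 array
    vgcreate vg_array1 /dev/sdb
    lvcreate -L 2T -n node1 vg_array1
    lvcreate -L 2T -n node2 vg_array1

    # Export each half as its own iSCSI LUN (tgtadm syntax shown).
    tgtadm --lld iscsi --mode target --op new --tid 1 \
           --targetname iqn.2010-06.example:array1.node1
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
           --backing-store /dev/vg_array1/node1
    tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL

    # On each Xen node: treat the imported LUN as a PV and carve out DomU disks.
    pvcreate /dev/sdc                      # the iSCSI disk after login
    vgcreate vg_domu1 /dev/sdc
    lvcreate -L 10G -n vm001-disk vg_domu1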
Hi!

> I'm using Ubuntu, so I'm not sure if CLVM is the best option..

It depends on what you want. I use CLVM on Debian for managing volumes on
my storage arrays (not iSCSI but ATA-over-Ethernet), and it works just
fine. :-)

> Does anyone have any experience with "shared storage" using iSCSI with
> Debian/Ubuntu Dom0s?

Me; I did not use the pre-compiled clvm, as it uses Red Hat's cluster stack
(cman), but compiled the packages myself. There are some guides[1] on how
to do this.

My main concern with cman is the fencing strategy (STONITH - shoot the
other node in the head), which is -- apart from being cruel -- beside the
point of running a Dom0: if one node loses its connection to the cluster, I
do _not_ want it to get killed. For me it is enough to be sure that all
modifications to the LVM volumes are only made by nodes which hold the
quorum. So what CLVM compiled with OpenAIS does is just fine for me. Plus:
the setup is way easier. ;-)

Up to now I have failed to get cmirrord working reliably with OpenAIS.
Perhaps someone else has experience with this?
--
Adi

[1] http://h2o.glou.fr/post/2009/04/20/clvm-openais-on-Debian/Lenny
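For reference, once clvmd is running (whether on top of cman or OpenAIS),
using it looks roughly like this; the device and VG names are invented:

    # /etc/lvm/lvm.conf on every dom0: hand locking over to clvmd.
    #   locking_type = 3

    # Create the shared VG once, from any node; -cy marks it as clustered.
    pvcreate /dev/sdc                      # the shared iSCSI/AoE disk
    vgcreate -cy vg_shared /dev/sdc

    # LVs can then be created, extended or removed from any node and are
    # visible on all of them, which is what makes live migration practical.
    lvcreate -L 10G -n vm001-disk vg_shared
    lvextend -L +5G /dev/vg_shared/vm001-disk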
"James Harper" <james.harper@bendigoit.com.au> writes:>> "James Harper" <james.harper@bendigoit.com.au> writes: >> >>>> You can use live migration in such setup, even safely if you back >>>> it by clvm. You can even live without clvm if you deactivate your >>>> VG on all but a single dom0 before changing the LVM metadata in any >>>> way. A non-clustered VG being active on multiple dom0s isn''t a >>>> problem in itself and makes live migration possible, but you''d >>>> better understand what you''re doing. >>> >>> You can''t snapshot though. I tried that once years ago and it made a >>> horrible mess. >> >> Even if done after deactivating the VG on all but a single node? >> That would be a bug. According to my understanding, it should work. >> I never tried, though, as snapshotting isn''t my preferred way of >> making backups. On the other hand I run domUs on snapshots of local >> LVs without any problem. And an LV being "local" is a concept beyond >> LVM in the above setting, so it can''t matter... > > A snapshot is copy-on-write. Every time the ''source'' is written to, a > copy of the original block is saved to the snapshot (I may have that the > wrong way around).It''s a little bit more complicated, but the basic idea is this.> Doing that though involves a remapping of the snapshot every time the > source is written to (eg block x isn''t in the ''source'' anymore, so > storage is allocated to it etc) which involves a metadata update.No, operation of the snapshot doesn''t involve continuous *LVM* metadata updates, even though the chunk mapping is really metadata with respect to the block devices themselves.> So if the VG remained deactivated on all nodes for the life of the > snapshot then it may work, and maybe this is what you meant in which > case you are correct.Yes, I didn''t elaborate, but this is my advice.> If the activated the VG on the other nodes after creating the snapshot > though, then problems may (will) arise!Only if you access data in the same LV from different hosts (metadata updates are also excluded, of course). From this point of view, the origin and the snapshot LVs (and the cow device) must be considered the "same" LV. Basically, this is why clvm does not support snapshots. And of course I didn''t consider cluster filesystems and similar above. I think we''re pretty much on the same page. -- Regards, Feri. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Hi Jonathan!

I think you could look at OpenSolaris/Nexenta for an iSCSI NAS. This
solution can help you solve many problems with backup (ZFS snapshots do not
degrade disk performance, and taking them is very easy and can be
automated). Nexenta also has a very good web admin interface, so you can
export a separate iSCSI LUN for each DomU and manage them easily.

> Hi Everyone,
> [...]
> b) Do all LVM stuff on the storage server and export each LVM logical
> volume to the correct Xen hosts via iSCSI? Since each host could have
> around 100 VMs on it, that's a lot of iSCSI exporting!
On Tue, Jun 15, 2010 at 3:30 PM, Igor Shalakhin <igor@igsbox.ru> wrote:
> ZFS snapshots do not degrade disk performance

Hard to believe.
--
Javier
Don't believe it, try it! :)

For the last 2 years I have worked with Xen on top of OpenSolaris and have
had no problems with ZFS performance. Now I work with Xen on Linux, and I
think Linux does not have, and in the near future will not have, such a
handy and well-considered filesystem as ZFS.

A little example: with ZFS I can create a sparse block device in a
filesystem subtree, attach it to a DomU as a phy: device, define scheduled
snapshotting of it, and make incremental(!) backups of the block device to
another system. And all of this with 5-6 standard system commands. Can you
do that with LVM? Simply impossible... Unfortunately, Solaris does not have
DRBD.

16.06.2010 1:28, Javier Guerra Giraldez wrote:
> On Tue, Jun 15, 2010 at 3:30 PM, Igor Shalakhin <igor@igsbox.ru> wrote:
>> ZFS snapshots do not degrade disk performance
>
> Hard to believe.

--
Igor Shalakhin
igor@igsbox.ru
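For anyone curious, the commands being alluded to look roughly like this on
OpenSolaris; the pool, volume and host names are invented:

    # Create a sparse (thin-provisioned) 10 GB zvol and hand it to a domU.
    zfs create -s -V 10G tank/vm001
    #   disk = ['phy:/dev/zvol/dsk/tank/vm001,xvda,w']   (in the domU config)

    # Snapshots are instant and can be scheduled with cron or an SMF service.
    zfs snapshot tank/vm001@monday
    zfs snapshot tank/vm001@tuesday

    # Send the full stream once, then only the increments, to another box.
    zfs send tank/vm001@monday | ssh backuphost zfs receive backup/vm001
    zfs send -i monday tank/vm001@tuesday | ssh backuphost zfs receive backup/vm001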
On Wed, Jun 16, 2010 at 3:15 AM, Igor Shalakhin <igor@igsbox.ru> wrote:
> Don't believe it, try it! :)

My point is that ZFS snapshots also do copy-on-write, so there _is_ a
performance degradation. Unless it is doing COW even when there's no
snapshot in place.
--
Javier
> Unless it is doing COW even when there's no snapshot in place.
>
> --
> Javier

In fact, that's exactly right. ZFS is built around copy-on-write semantics;
snapshot writes are no more expensive than ordinary filesystem writes.

- jonathan
On Wed, Jun 16, 2010 at 10:18 AM, Jonathan Dye <jdye@adaptivecomputing.com> wrote:
> In fact, that's exactly right. ZFS is built around copy-on-write
> semantics; snapshot writes are no more expensive than ordinary
> filesystem writes.

It does COW on metadata. Normal writes shouldn't copy, since there's
nothing to preserve.
--
Javier
On Tuesday 15 June 2010 16:34:35 Ferenc Wagner wrote:
> [...]
> I think we're pretty much on the same page.

I would like to add, especially for Jonathan, that snapshotting virtual
machines does not provide a safe way of backing them up unless they are
shut down first.

B.
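As an illustration of the safe variant described above (shut down first,
then snapshot), here is a sketch assuming an LVM-backed domU called vm001
and the xm toolstack, with the VG only active on this dom0 as discussed
earlier; all names are invented:

    # Shut the guest down cleanly and wait for it to finish.
    xm shutdown -w vm001

    # Snapshot its disk while nothing is writing to it, then restart it.
    lvcreate -s -L 5G -n vm001-snap /dev/vg_domu1/vm001-disk
    xm create /etc/xen/vm001.cfg

    # Back up the now-consistent snapshot at leisure, then drop it.
    dd if=/dev/vg_domu1/vm001-snap bs=1M | gzip > /backup/vm001.img.gz
    lvremove -f /dev/vg_domu1/vm001-snap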