Hi, I'm curious about how you guys deal with big virtualization
installations. To date we have only dealt with a small number of VMs
(~10) on not-too-big hardware (2x quad-core Xeons + 16GB RAM). As the
"storage guy" I find it quite convenient to present one LUN per VM to
the dom0s; that makes live migration possible without the complexity
of a cluster filesystem or cLVM. The thing is that Linux apparently
has a 255 SCSI device limit (255/2 = ~128 with multipath), and that
won't scale in big installations (300 VMs, for example).

Any experiences on this scale?

Regards,

-- 
Ciro Iriarte
http://cyruspy.wordpress.com

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
2011/9/9 Ciro Iriarte <cyruspy@gmail.com>:
> Hi, I'm curious about how you guys deal with big virtualization
> installations. [...]
>
> Any experiences on this scale?

Also, what about backups? Currently I do dd images of the OS LUNs and
regular backups of the data (with a backup application agent on the
domUs); that won't scale well. The VMware admin around here always
brags about the backup API VMware has. How far are we from something
like that?

Regards,

-- 
Ciro Iriarte
http://cyruspy.wordpress.com
Hi,

On Fri, Sep 9, 2011 at 9:07 PM, Ciro Iriarte <cyruspy@gmail.com> wrote:
> The thing is that Linux has a 255 SCSI device limit apparently
> (255/2 = ~128 with multipath) and that won't scale in big
> installations (300 VMs for example).

I don't think Linux > 2.4 has such a limitation. I administer servers
which have 500-600 LUNs * 4 (multipath), and a short googling showed me
a limit of 2^32 (max_scsi_luns, max_luns).

Ervin
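For reference, the limit Ervin mentions is a parameter of the scsi_mod
module on 2.6 kernels. A minimal sketch of checking and raising it (the
sysfs paths are the usual ones on 2.6; verify on your distribution):

```shell
# Check the current per-target LUN scan limit (2.6 kernels)
cat /sys/module/scsi_mod/parameters/max_luns

# Raise it at module load time, e.g. in /etc/modprobe.d/scsi.conf:
#   options scsi_mod max_luns=512
# or, if scsi_mod is built into the kernel, on the kernel command line:
#   scsi_mod.max_luns=512

# After presenting new LUNs on the array, rescan each host adapter
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"   # wildcard channel/target/lun rescan
done
```

The rescan loop only picks up newly presented LUNs; multipath then needs
a `multipath -r` to map the new paths.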
On 09/09/2011 03:07 PM, Ciro Iriarte wrote:
> Hi, I'm curious about how you guys deal with big virtualization
> installations. [...] The thing is that Linux has a 255 SCSI device
> limit apparently (255/2 = ~128 with multipath) and that won't scale
> in big installations (300 VMs for example).
>
> Any experiences on this scale?

I run a couple hundred VMs across a handful of blades. I recommend going
to fewer, larger LUNs, carving them up with LVM, and handing out LVs to
your VMs. You don't actually need cLVM to do this! All the cluster infra
does (for its nasty administrative overhead) is keep the LVM metadata
(not your actual data) consistent through cluster-wide locks. You can
manage this yourself by, for example, making changes on one node and
refreshing the other nodes with things like 'vgscan -ay'. I typically
allocate LUNs 500GB at a time. You can invent means of keeping things
consistent that work for your environment; just test them first.

If you really want data security -- preventing one node from hosing data
that's accessible from another node -- you'll have to go with a cluster
filesystem. That still doesn't help keep your multipathing and LVM
configs consistent, though, so I think you're better off just skipping
that step.

John

-- 
John Madden / Sr UNIX Systems Engineer
Office of Technology / Ivy Tech Community College of Indiana

Free Software is a matter of liberty, not price. To understand
the concept, you should think of Free as in 'free speech,' not
as in 'free beer.' -- Richard Stallman
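A minimal sketch of the workflow John describes, assuming a shared
multipath device `/dev/mapper/mpath0` and hypothetical VG/VM names.
(Note that `vgscan` itself only rescans metadata; activating the new LV
on the other nodes is done with `lvchange -ay` or `vgchange -ay`.)

```shell
# On one dom0: put the shared 500GB LUN under LVM and carve out a guest disk
pvcreate /dev/mapper/mpath0
vgcreate vg_guests /dev/mapper/mpath0
lvcreate -L 20G -n vm01-disk vg_guests

# On every other dom0: pick up the metadata change, then activate the LV
vgscan                            # rescan all devices for LVM metadata
lvchange -ay vg_guests/vm01-disk  # make the device node available

# The guest config then points at the LV instead of a raw LUN, e.g.:
#   disk = [ 'phy:/dev/vg_guests/vm01-disk,xvda,w' ]
```

The discipline this sketch relies on is exactly what John warns about:
only ever change LVM metadata from one node at a time, and refresh the
others before they touch the VG.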
2011/9/9 Ervin Novak <enovak@opensuse.hu>:
> I don't think Linux > 2.4 has such a limitation. I administer servers
> which have 500-600 LUNs * 4 (multipath), and a short googling showed
> me a limit of 2^32 (max_scsi_luns, max_luns).

Sounds great. Is any special configuration needed on the HBA driver
side?

Regards,

-- 
Ciro Iriarte
http://cyruspy.wordpress.com
2011/9/9 John Madden <jmadden@ivytech.edu>:
> I run a couple hundred VMs across a handful of blades. I recommend
> going to fewer, larger LUNs, carving them up with LVM, and handing
> out LVs to your VMs. [...]

Any advantage to using large LUNs + LVM instead of independent LUNs,
apart from snapshots? (According to Novell support, LVM on top of LVM is
a bad thing...) I remember reading that Xen itself implements some kind
of locking...

Regards,

-- 
Ciro Iriarte
http://cyruspy.wordpress.com
> You don't actually need cLVM to do this! All the cluster infra does
> (for its nasty administrative overhead) is keep the LVM metadata (not
> your actual data) consistent through cluster-wide locks. You can
> manage this yourself by, for example, making changes on one node and
> refreshing the other nodes with things like 'vgscan -ay'. I typically
> allocate LUNs 500GB at a time. You can invent means of keeping things
> consistent that work for your environment, just test them first.

Another thing cLVM does is enforce the restriction on snapshots. Without
cLVM you could create a snapshot on one node and completely trash the
volume. I believe the CoW action of LVM snapshots involves a metadata
update, so you can't use it in a clustered environment. This is why I
use LVM on the SAN rather than on the DomU.

James
I forgot to include the list…

On 10.09.2011 at 09:18, James Harper wrote:
>> On 10.09.2011 at 02:23, "James Harper"
>> <james.harper@bendigoit.com.au> wrote:
>>
>>> Another thing cLVM does is enforce the restriction on snapshots.
>>> Without cLVM you could create a snapshot on one node and completely
>>> trash the volume. I believe the CoW action of LVM snapshots involves
>>> a metadata update, so you can't use it in a clustered environment.
>>> This is why I use LVM on the SAN rather than on the DomU.
>>
>> That is no longer true. cLVM supports clustered snapshots; I use it
>> already. But there is another limitation: you have to open the LV in
>> exclusive mode (lvchange -aey). It is enough to compile cLVM with
>> openais and you have your cluster running in minutes.
>
> That's great news! I was checking the web every so often to see if
> this feature had been added yet, but hadn't recently.

If you use Debian, I can send a link where you can download my packages.

> Thanks
>
> James

Christian
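The exclusive-activation step Christian mentions looks roughly like the
sketch below. Names are hypothetical (`vg_guests/vm01-disk`), and this
assumes clvmd is already running on all cluster nodes:

```shell
# Activate the LV exclusively on this node; while the exclusive lock is
# held, cLVM refuses activation on the other cluster nodes
lvchange -aey vg_guests/vm01-disk

# With the LV held exclusively, a snapshot is safe to create, e.g. for
# a backup, since no other node can touch the volume's metadata
lvcreate -s -L 5G -n vm01-snap /dev/vg_guests/vm01-disk

# ... back up from /dev/vg_guests/vm01-snap ...

# Remove the snapshot and release the exclusive activation when done
lvremove -f vg_guests/vm01-snap
lvchange -an vg_guests/vm01-disk
```

The trade-off is that an exclusively activated LV cannot simultaneously
be active anywhere else, which is what the live-migration question below
in the thread is about.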
>>> That is no longer true. CLVM supports clustered snapshots. I use it
>>> already.

When I do a live migration of a guest that has its data on a cLVM
volume, does that work? I mean, wouldn't some kind of "deactivate volume
at source" and "activate volume at target" have to be incorporated into
the xl (or xm/xend) toolstacks? Wouldn't the guest otherwise be
transferred to a host that cannot give it full access to its disk?

-- 
Andreas Olsowski
Leuphana Universität Lüneburg
Rechen- und Medienzentrum
Scharnhorststraße 1, C7.015
21335 Lüneburg

Tel: ++49 4131 677 1309
On 10.09.2011 at 13:39, Andreas Olsowski wrote:
> When I do a live migration of a guest that has its data on a cLVM
> volume, does that work? I mean, wouldn't some kind of "deactivate
> volume at source" and "activate volume at target" have to be
> incorporated into the xl (or xm/xend) toolstacks?

I have not tested live migration, but I am sure that it would not work.
I believe that xl/xm cannot deactivate the LV on one host and activate
it on the other. I'm afraid that xm does not even know that the VM uses
LVs as disks; xm can only distinguish between file and block device. But
that is only a guess on my part.

> Wouldn't the guest otherwise be transferred to a host that cannot give
> it full access to its disk?
On Saturday, 10.09.2011, at 10:42 +0200, Christian Motschke wrote:
>> That is no longer true. cLVM supports clustered snapshots; I use it
>> already. But there is another limitation: you have to open the LV in
>> exclusive mode (lvchange -aey). It is enough to compile cLVM with
>> openais and you have your cluster running in minutes.
>
> If you use Debian, I can send a link where you can download my
> packages.

Christian,

I've already done some clvm/openais packages for lenny (I'm not going to
dist-upgrade that monocultural dom0 "cloud" until EOL, for obvious
reasons). My packages were built on openais 0.84 and clvm 2.02.39-8. If
yours are newer (maybe 2.02.46 or higher) AND match lenny dependencies
(especially the whole DM stuff), I'd really like to check them out. If
possible, I'd prefer deb-src instead of deb ;)

Thanks,

Stephan

-- 
Stephan Seitz
Senior System Administrator

netz-haut GmbH
multimediale kommunikation

Zweierweg 22
97074 Würzburg

Telefon: 0931 2876247
Telefax: 0931 2876248

Web: www.netz-haut.de

Amtsgericht Würzburg – HRB 10764
Geschäftsführer: Michael Daut, Kai Neugebauer
On 11.09.2011 at 00:06, "netz-haut - stephan seitz"
<s.seitz@netz-haut.de> wrote:
> I've already done some clvm/openais packages for lenny (I'm not going
> to dist-upgrade that monocultural dom0 "cloud" until EOL, for obvious
> reasons). My packages were built on openais 0.84 and clvm 2.02.39-8.
> If yours are newer (maybe 2.02.46 or higher) AND match lenny
> dependencies (especially the whole DM stuff), I'd really like to check
> them out. If possible, I'd prefer deb-src instead of deb ;)

I use corosync and openais (1.4?) from unstable and have compiled the
whole of LVM myself. I use the packages under squeeze, because there are
no other dependencies. LVM is 2.02.88. Source and binary packages can be
downloaded from www.motschke.de/debian. Maybe you can compile the
sources under lenny.

Best regards,

Christian
> Any advantage to using large LUNs + LVM instead of independent LUNs,
> apart from snapshots? (According to Novell support, LVM on top of LVM
> is a bad thing...) I remember reading that Xen itself implements some
> kind of locking...

I think easier management is the key. If you're already managing the SAN
and assigning LUNs to your boxen, then managing multipath.conf across
your cluster, it's nice to only do that 4 times for a couple TB rather
than once for each VM, for example.

John

-- 
John Madden / Sr UNIX Systems Engineer
Office of Technology / Ivy Tech Community College of Indiana
On 09/10/2011 07:39 AM, Andreas Olsowski wrote:
> When I do a live migration of a guest that has its data on a cLVM
> volume, does that work? I mean, wouldn't some kind of "deactivate
> volume at source" and "activate volume at target" have to be
> incorporated into the xl (or xm/xend) toolstacks?

Live migration works entirely independently of the disk technology under
the covers. NFS, LVM, iSCSI, FC, OCFS2 -- live migration doesn't care
and works the same on all of them.

John

-- 
John Madden / Sr UNIX Systems Engineer
Office of Technology / Ivy Tech Community College of Indiana
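As a concrete illustration of what John means (guest and host names here
are hypothetical): live migration only requires that both dom0s can open
the same backing block device, whatever it is. With the exclusive cLVM
activation discussed earlier in the thread this precondition would not
hold, which is the caveat Andreas raised.

```shell
# xm toolstack (xl is analogous): move a running guest to another dom0.
# This works only if dom0-node2 can already access the guest's disk,
# e.g. the same /dev/vg_guests/vm01-disk LV or the same multipathed LUN.
xm migrate --live vm01 dom0-node2

# Quick precondition check on the target node before migrating:
[ -b /dev/vg_guests/vm01-disk ] || echo "disk not visible on target"
```

If the LV is not active on the target, a `vgscan` followed by
`lvchange -ay` there (as sketched earlier in the thread) is needed
before migration.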
On 13.09.2011 at 17:31, John Madden wrote:
> I think easier management is the key. If you're already managing the
> SAN and assigning LUNs to your boxen, then managing multipath.conf
> across your cluster, it's nice to only do that 4 times for a couple TB
> rather than once for each VM, for example.

I just want to add what the iscsi-SCST guys suggest (from
http://scst.svn.sourceforge.net/viewvc/scst/trunk/iscsi-scst/README?revision=3852&view=markup):

4. If you are going to use your target in a VM environment, for instance
as shared storage with VMware, make sure all your VMs connect to the
target via *separate* sessions, i.e. each VM has its own connection to
the target, not all VMs connected using a single connection. You can
check this using the SCST proc or sysfs interface. If you miss it, you
can greatly lose performance of parallel access to your target from
different VMs. This isn't related to the case where your VMs are using
the same shared storage, like with VMFS, for instance. In that case all
your VM hosts will be connected to the target via separate sessions,
which is enough.
2011/9/13 John Madden <jmadden@ivytech.edu>:
> I think easier management is the key. If you're already managing the
> SAN and assigning LUNs to your boxen, then managing multipath.conf
> across your cluster, it's nice to only do that 4 times for a couple TB
> rather than once for each VM, for example.

Csync2 is your friend here :)

Regards,

-- 
Ciro Iriarte
http://cyruspy.wordpress.com
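A minimal csync2 sketch for keeping multipath.conf in step across the
dom0s, along the lines Ciro suggests. Hostnames, the key path, and the
reload action are examples; each node needs csync2 installed and the
same shared key:

```shell
# /etc/csync2.cfg -- identical on every dom0
# group xen-dom0s {
#     host dom0-node1 dom0-node2 dom0-node3;
#     key  /etc/csync2.key;
#     include /etc/multipath.conf;
#     action {
#         pattern /etc/multipath.conf;
#         exec "/sbin/multipath -r";   # reload maps after a sync
#     }
# }

# Generate the shared key once, distribute it to all nodes, then sync:
csync2 -k /etc/csync2.key
csync2 -xv
```

Runs of `csync2 -xv` (e.g. from cron or after each change) push modified
files to the other hosts and fire the reload action there.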
On 09/14/11 10:11, Christian Motschke wrote:
> I just want to add what the iscsi-SCST guys suggest (from
> http://scst.svn.sourceforge.net/viewvc/scst/trunk/iscsi-scst/README?revision=3852&view=markup):
>
> 4. If you are going to use your target in a VM environment, for
> instance as shared storage with VMware, make sure all your VMs connect
> to the target via *separate* sessions, i.e. each VM has its own
> connection to the target, not all VMs connected using a single
> connection. [...]

Hi,

would this translate into using a separate iSCSI target for each VM
versus a separate iSCSI LUN?

thx,

B.
On 21.09.2011 at 11:14, Bart Coninckx <bart.coninckx@telenet.be> wrote:
> would this translate into using a separate iSCSI target for each VM
> versus a separate iSCSI LUN?

Yes, I think that is what is meant. But I have not tried it this way.

Best regards,

Christian
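To make the distinction concrete, a one-target-per-VM layout for
iscsi-scst might look roughly like the sketch below. This is an
assumption on my part: the target names and LV paths are made up, and
the IET-style `/etc/iscsi-scst.conf` syntax should be verified against
the README of your iscsi-scst version.

```shell
# /etc/iscsi-scst.conf -- one target per VM, so each initiator login
# (and therefore each VM's traffic) gets its own iSCSI session:
#
#   Target iqn.2011-09.example.san:vm01
#       Lun 0 Path=/dev/vg_guests/vm01-disk,Type=blockio
#   Target iqn.2011-09.example.san:vm02
#       Lun 0 Path=/dev/vg_guests/vm02-disk,Type=blockio
#
# versus one target with many LUNs, which funnels all VMs sharing an
# initiator through a single session:
#
#   Target iqn.2011-09.example.san:guests
#       Lun 0 Path=/dev/vg_guests/vm01-disk,Type=blockio
#       Lun 1 Path=/dev/vg_guests/vm02-disk,Type=blockio
```

The per-target layout matches the README's advice only when each VM (or
VM host) actually logs in separately; multiple LUNs behind one target
and one initiator still share a session.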