I have several SLES 10 SP1 Xen servers. Right now each one has its own internal disks, or has space on our SAN. I would like to be able to xm migrate VMs between my Xen servers. Everything I've read so far talks about all the Xen servers having network access to the VMs.

Is it possible to use a SAN for this? We can assign the same vdisk to multiple hosts, but I'm not sure what would be required on the dom0s so they can all use the same disk for storage of their domUs.

We do something similar with Oracle, where we have a cluster of Linux machines, but from what I've gathered Oracle does something special allowing all the systems to share the data vdisk. I was not part of that installation, so I'm not sure exactly how it's set up.

Anyone know if sharing a virtual disk on a SAN is possible with Xen? What's required to make it work?

Thanks,
James
Anno domini 2008 James Pifer scripsit:

> Anyone know if sharing a virtual disk on a SAN is possible with Xen?
> What's required to make it work?

Sure. Have a look at CLVM and/or EVMS.

If you are using image files, cluster file systems like GFS or OCFS2 may be interesting, too.

Ciao
Max
--
Follow the white penguin.
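To make that concrete, here is a minimal sketch of a domU config built on a CLVM-managed volume. All names below (vg_san, vm1_root, xenbr0) are invented for illustration, and the kernel/bootloader lines are left out. An xm config file is plain Python, so the disk entry simply points at the shared logical volume:

  # /etc/xen/vm/vm1 - hypothetical domU config; every name is illustrative.
  # /dev/vg_san/vm1_root is assumed to be a logical volume in a clustered
  # (CLVM-managed) volume group that all dom0s in the migration pool can see.
  # Kernel/bootloader settings are omitted for brevity.
  name   = "vm1"
  memory = 1024
  vcpus  = 1
  disk   = ["phy:/dev/vg_san/vm1_root,xvda,w"]
  vif    = ["bridge=xenbr0"]

With that in place, xm migrate --live only has to move the guest's memory; the block device path resolves identically on both hosts.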
Maximilian Wilhelm schrieb:

> Sure. Have a look at CLVM and/or EVMS.
>
> If you are using image files, cluster file systems like GFS or OCFS2 may
> be interesting, too.

Or just give each domU a directly raw-mapped volume on the SAN. This is the solution here (using Solaris Nevada CE + Sun xVM). Live migration of Linux PV guests works quite fine.

Florian
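On a Linux dom0, the xm-side equivalent would be a one-line change in the domU config: point the disk entry straight at the LUN's device node rather than at an image file. A hypothetical fragment (the by-id name is invented; use whatever the LUN appears as under /dev/disk/by-id on your dom0s):

  # Hypothetical fragment: a whole SAN LUN handed to the guest as its disk.
  # The /dev/disk/by-id path is stable and identical on every dom0 that can
  # see the LUN, so the same config file works on any host in the pool.
  disk = ["phy:/dev/disk/by-id/scsi-360a98000486e2f34562d4b6c7a2f,xvda,w"]

No cluster volume manager or cluster filesystem is needed in the dom0 for this; the guest is the only writer on that LUN.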
We tested that too. It is overkill and degrades performance.

Simply attach the raw LUNs to your domUs ( /dev/disk/by-id ) and allow all your dom0s to see them. Live migration works perfectly then.

A little warning: every LUN should only be accessed by one domU or dom0, unless the filesystem on it is able to handle multiple systems. Booting your domU on two systems from the same LUN will result in massive data corruption.

We have been using this setup for over a year now and have had no problems whatsoever - provided you check that every domU has only been started once AND every LUN is attached to only one domU (typos can happen). We're using a simple script to check all that before we start a domU. There are also a lot of Xen Linux clustering projects around that can help you, just google them.

Kindest regards,

Peter.

On Thursday 04 September 2008 16:34:39 Maximilian Wilhelm wrote:
> Sure. Have a look at CLVM and/or EVMS.
>
> If you are using image files, cluster file systems like GFS or OCFS2 may
> be interesting, too.
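Peter's "simple script" is not shown in the thread, but a minimal sketch of such a pre-start check might look like the following. The dom0 host names, the use of passwordless ssh between the dom0s, and the /etc/xen/vm/<name> config path are all assumptions:

  #!/usr/bin/env python
  # Hypothetical pre-start check in the spirit of the script Peter mentions:
  # refuse to start a domU if it is already running on any dom0 in the pool.
  # DOM0S, passwordless ssh between dom0s and the /etc/xen/vm/<name> config
  # path are assumptions, not details taken from the thread.
  import subprocess
  import sys

  DOM0S = ["xen01", "xen02", "xen03"]  # every dom0 that sees the shared LUNs

  def running_on(host, domu):
      # "xm list <name>" exits non-zero when the domain is not active there.
      devnull = open("/dev/null", "w")
      rc = subprocess.call(["ssh", host, "xm", "list", domu],
                           stdout=devnull, stderr=devnull)
      devnull.close()
      return rc == 0

  def main():
      if len(sys.argv) != 2:
          sys.exit("usage: safe-start <domU name>")
      domu = sys.argv[1]
      for host in DOM0S:
          if running_on(host, domu):
              sys.exit("%s is already running on %s - not starting it a "
                       "second time (its LUN would get corrupted)" % (domu, host))
      # Nothing found anywhere in the pool, so it is safe to start the guest.
      sys.exit(subprocess.call(["xm", "create", "/etc/xen/vm/%s" % domu]))

  if __name__ == "__main__":
      main()

The same loop could also grep every domU config for the LUN's by-id path, to catch the "one LUN attached to two domUs" typo Peter warns about.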
On Mon, Sep 8, 2008 at 10:23 AM, Peter Van Biesen <peter.vanbiesen@vaph.be> wrote:
> We tested that too. It is overkill and degrades performance.

Do you mean GFS/OCFS, or CLVM/EVMS?

The former have obviously higher overhead, the latter shouldn't...

It's important because if you don't have NetApp-level storage subsystems with great administration, then you can do iSCSI/AoE/gnbd and CLVM, and get great $/GB results with still really good administrability.

--
Javier
Both.

Degradation is inherent to the concept of shared storage. Either you trust the fact that you're alone on a disk and use disk caching, or you're not, and you need some other mechanism to sync the changes among the participants. This can be via I/O or via the network, so you always have delay and bandwidth usage. So if you compare ext3 on a SAN disk to a GFS or CLVM volume, you get a performance hit.

But, you would say, those are things you can't compare, as GFS and CLVM do things that ext3 on a SAN disk can't. True. But do we _need_ those things? No. In fact, we do _not_ want the ability to start the same domU several times on different machines. And by letting the domU use the SAN disk as a dedicated disk, it can use its disk cache without taking any other machines into account. Even live migration is not a problem, as the memory (and thus the disk cache) just gets migrated too...

I simply do not see the added value of using a clustered filesystem for a domU. And in that light, any additional overhead is too much. Why make things complex? Complex setups have complex problems, in my experience. We solved the problem with a very simple script, checking all cluster members to see if a specific domU was running or not. Does that weigh up against having to learn, administer, tune and possibly debug a clustered file system?

Lastly, I really don't see the $/GB argument. A GB costs the same, it's just a bit slower on a clustered filesystem, that's all.

Peter.

Ps: nice line-up of acronyms, btw 8-)

On Monday 08 September 2008 17:30:25 Javier Guerra wrote:
> Do you mean GFS/OCFS, or CLVM/EVMS?
>
> The former have obviously higher overhead, the latter shouldn't...
On Wed, Sep 10, 2008 at 10:09 AM, Peter Van Biesen <peter.vanbiesen@vaph.be> wrote:
> Both.
>
> Degradation is inherent to the concept of shared storage.

In this case the degradation isn't because the storage is shared; it's because of the sync mechanisms to avoid stepping on the other machine's toes. And there's a world of difference between locking to access the volume partition (CLVM/EVMS-ha) and locking at file level (GFS/OCFS).

> I simply do not see the added value of using a clustered filesystem for a
> domU. And in that light, any additional overhead is too much.

Totally agree.

But I don't find CLVM overhead any worse than LVM alone. I asked because you seemed to advise against it, and wanted to know if that's because of specific experience, or just a general objection to cluster filesystems.

> Lastly, I really don't see the $/GB argument. A GB costs the same, it's
> just a bit slower on a clustered filesystem, that's all.

Several not-so-big boxes running OpenFiler are A LOT cheaper than comparable-capacity NetApp setups. The only drawback is that you can't join/partition/migrate between boxes without help from the block-client boxes, thus using CLVM.

> Ps: nice line-up of acronyms, btw 8-)

Yep. OTOH, TANSTAAFL, so a11y and r9y are way down, AFAICT :-P

--
Javier
On Wednesday 10 September 2008 17:43:09 Javier Guerra wrote:
> In this case the degradation isn't because the storage is shared; it's
> because of the sync mechanisms to avoid stepping on the other machine's
> toes. And there's a world of difference between locking to access the
> volume partition (CLVM/EVMS-ha) and locking at file level (GFS/OCFS).

Point taken. However, during live migration you still need two machines to be able to access the same volume at the same time. If such locking were enforced, releasing it would be an additional step you needed to take before migration - thus one more error you could make.

> But I don't find CLVM overhead any worse than LVM alone. I asked because
> you seemed to advise against it, and wanted to know if that's because of
> specific experience, or just a general objection to cluster filesystems.

I had bad experiences with it, but maybe I haven't studied it long enough. I'm a bit reluctant to install things that span multiple machines. You can't 'reset' them without bringing all of them down.

> Several not-so-big boxes running OpenFiler are A LOT cheaper than
> comparable-capacity NetApp setups. The only drawback is that you can't
> join/partition/migrate between boxes without help from the block-client
> boxes, thus using CLVM.

I see. Then I misunderstood. Migration could indeed be a problem. We haven't changed storage systems yet, but I suppose I will have to bring my domUs down and dd the disks over to do that. CLVM would have fixed that, yes. Maybe I should pick that up again. Thinking about it, this would greatly simplify the 'unused LUN' problem, mmmmm...

Another setup I was thinking about is starting every domU with a kernel and initrd for iSCSI booting, so not using a vbd at all. But maybe this would degrade performance far more than a clustered fs. And it adds a single point of failure, though it facilitates migration if you use LVM on it.

Kindest regards,

Peter.
> Point taken. However, during live migration you still need two machines
> to be able to access the same volume at the same time.

CLVM doesn't prevent two hosts from accessing the same LV at the same time; it just ensures that changes to the VGs are coherent across the cluster (its locking is limited to configuration-related activities). Migration shouldn't be an issue here.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
Peter Van Biesen wrote on Sep 8, 2008:
> Simply attach the raw LUNs to your domUs ( /dev/disk/by-id ) and allow
> all your dom0s to see them. Live migration works perfectly then.

Hi -

My SAN is able to provide up to 32 LUNs, which is quite insufficient for the number of domUs I need: I've got 5 dom0s, on which I intend to run +/- 10 domUs each. Some of them are databases with a dedicated logical drive on the SAN for the data, in addition to the root filesystem, which means 2 LUNs per domU in that case.

What could you suggest in that situation?

Thanks in anticipation,
--
Olivier Le Cam
Education Headquarters, Versailles, France
Olivier Le Cam <Olivier.LeCam@crdp.ac-versailles.fr> writes:

> My SAN is able to provide up to 32 LUNs, which is quite insufficient for
> the number of domUs I need [...] What could you suggest in that situation?

Use CLVM. That way 1 shared LU is enough for the whole cluster.

--
Feri.
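A sketch of how that answers the database case: one shared LU becomes a clustered volume group, which is then carved into per-domU logical volumes (created once, on any node, with ordinary lvcreate while clvmd is running). The VG and LV names below are invented, and only the disk lines of a hypothetical domU config are shown:

  # Hypothetical fragment: instead of two LUNs per guest, the guest gets two
  # logical volumes out of one clustered VG ("vg_san", an assumed name) that
  # is backed by a single shared LU and visible on all five dom0s.
  disk = ["phy:/dev/vg_san/db01_root,xvda,w",
          "phy:/dev/vg_san/db01_data,xvdb,w"]

The 32-LUN limit then only constrains how many LUs back the volume group, not how many domUs can run on top of it.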
> Olivier Le Cam <Olivier.LeCam@crdp.ac-versailles.fr> writes:
>> What could you suggest in that situation?
>
> Use CLVM. That way 1 shared LU is enough for the whole cluster.

Does clvm support snapshots these days? I remember, ages ago when I was testing it, thinking that the reason it wasn't allowing me to create a snapshot was because I wasn't doing something right, so I forced the issue. The result was a completely corrupted LV and a slightly wiser me.

James
"James Harper" <james.harper@bendigoit.com.au> writes:>> Olivier Le Cam <Olivier.LeCam@crdp.ac-versailles.fr> writes: >> >>> My SAN is able to provide up to 32 lun''s, which is quite >>> insufficient for the number of domUs I need: I''ve got 5 dom0s, >>> which I intent to run +/- 10 domUs per dom0s on. Some of them are >>> databases with a dedicated logical drive on the SAN for the datas >>> in additition to the root filesystem, which means 2 lun''s per >>> domUs in that case. >>> >>> What could you suggest in that situation? >> >> Use clvm. That way 1 shared LU is enough for the whole cluster. > > Does clvm support snapshots these days?Doesn''t look like so, see https://www.redhat.com/archives/linux-lvm/2008-October/msg00025.html and the followup. -- Feri. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users