Hi list,

I have the setup illustrated below. The app (application Xen) servers
will have live migration between them, and at any given time new app
servers can be added, as well as more storage on the shared storage
server. I have exported the block devices from the shared storage with
nbd.

      ---------    ---------
      | app 1 |    | app 2 |
      ---------    ---------
            \        /
             \      /
              \    /
        ------------------
        | shared storage |
        ------------------

I am having a lot of trouble getting GFS to work on the xenified kernel
2.6.16, whereas I have no problem patching the xenified kernel with the
EVMS patches. I haven't tried booting that kernel yet, but it compiled
fine.

Has anyone tried anything similar, and with what result?
Can I change the setup to allow more storage servers for HA?
Does anyone have any thoughts on this matter?

- Karsten
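PS: for reference, the nbd export/import is done roughly like this (the
hostname, port and device names below are just placeholders, not my real
ones):

    # on the shared storage server: export one block device per guest
    nbd-server 2000 /dev/sdb1

    # on an app server: attach the exported device
    nbd-client storage-host 2000 /dev/nbd0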
On Thursday 25 May 2006 10:55 am, Karsten Nielsen wrote:
> I am having a lot of trouble getting GFS to work on the xenified
> kernel 2.6.16, whereas I have no problem patching the xenified kernel
> with the EVMS patches. I haven't tried booting that kernel yet, but it
> compiled fine.

The choice isn't between GFS and EVMS, because they are at different
levels of any solution. EVMS is a volume manager, like LVM, while GFS is
a filesystem (like ext3 or reiser, but cluster-aware).

In your case, I would use a volume manager (CLVM or EVMS) to manage the
storage, split it into logical volumes, and give those to the Xen domUs.
There is no need for any filesystem layer between the VBDs and the
storage server.

> Can I change the setup to allow more storage servers for HA?

Using DRBD you can mirror the storage server transparently to the app
servers. If you add more storage servers (or DRBD pairs of servers),
just add them to the volume group and all app servers will get more
storage, either to add more logical volumes or to extend existing ones.

--
Javier
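PS: just to illustrate the "logical volumes straight to the domUs" idea
(the volume group, LV and device names here are made up, not from your
setup):

    # on the storage server: carve one LV per guest out of the volume group
    lvcreate -L 10G -n app1-disk vg0

    # export it with nbd as you already do, then in the domU config on
    # the app server point the virtual disk at the imported device:
    #   disk = [ 'phy:/dev/nbd0,sda1,w' ]

No filesystem is created on the shared side at all; the domU formats its
own virtual disk.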
Javier Guerra wrote:
> On Thursday 25 May 2006 10:55 am, Karsten Nielsen wrote:
>> I am having a lot of trouble getting GFS to work on the xenified
>> kernel 2.6.16, whereas I have no problem patching the xenified kernel
>> with the EVMS patches. I haven't tried booting that kernel yet, but
>> it compiled fine.
>
> The choice isn't between GFS and EVMS, because they are at different
> levels of any solution. EVMS is a volume manager, like LVM, while GFS
> is a filesystem (like ext3 or reiser, but cluster-aware).

Is it important to use a cluster-aware filesystem in my case? I can
imagine that the setup may, at some point, have heartbeat implemented
between the app servers, so that if a domU on e.g. dom0-1 becomes
unresponsive, that domU gets started on dom0-2. But at no point will
more than one instance of the same domU be running.

How flexible is this setup? Can I create a cluster from this setup? And
does that require GFS?

> In your case, I would use a volume manager (CLVM or EVMS) to manage
> the storage, split it into logical volumes, and give those to the Xen
> domUs. There is no need for any filesystem layer between the VBDs and
> the storage server.

Which volume manager is preferable to the other? CLVM implies a locking
mechanism? Can EVMS do the same?

>> Can I change the setup to allow more storage servers for HA?
>
> Using DRBD you can mirror the storage server transparently to the app
> servers. If you add more storage servers (or DRBD pairs of servers),
> just add them to the volume group and all app servers will get more
> storage, either to add more logical volumes or to extend existing
> ones.

If I want to use DRBD, can I start out with only one storage server, or
do I need two storage servers?
On Thursday 25 May 2006 11:40 am, Karsten Nielsen wrote:
> Is it important to use a cluster-aware filesystem in my case? I can
> imagine that the setup may, at some point, have heartbeat implemented
> between the app servers, so that if a domU on e.g. dom0-1 becomes
> unresponsive, that domU gets started on dom0-2. But at no point will
> more than one instance of the same domU be running.

As I see it, it's important to use cluster-aware tools, but not
necessarily a cluster-aware filesystem. If you can do live migration,
any heartbeat daemon should be able to trigger the migration.

> Which volume manager is preferable to the other? CLVM implies a
> locking mechanism? Can EVMS do the same?

There are three options for a volume manager:

plain LVM: works OK on a cluster, but you have to bring it down to do
any modification to the volume group.

CLVM: the same as LVM, but uses the GFS lock manager to allow online
administration. The on-disk layout, device-mapper and everything else
is exactly the same as plain LVM: no extra overhead, nothing. It
doesn't depend on a running GFS, just the lock manager and fencing.

EVMS: more generic than LVM; the administration utilities handle the
whole stack: physical devices, partitions, RAID, volume groups, logical
volumes, even filesystems. Its plugin architecture lets it manage md
(for RAID), dm (the device mapper, used by LVM), and filesystems. It
can use the LVM2 disk layout and coexist with LVM2. It uses Linux-HA
for heartbeat, fencing and locking, becoming fully cluster-aware.

Personally, I prefer the LVM approach of doing just one thing (joining
devices and splitting them into logical volumes) and doing it well, but
the theory and design of EVMS seem better and cleaner.

Note that to turn plain LVM into CLVM, you have to install the GFS
packages and join the Xen app servers to a cluster; there is no need to
create a GFS filesystem, just create the LVs.

> If I want to use DRBD, can I start out with only one storage server,
> or do I need two storage servers?

I don't have experience with this, but I guess you could add the mirror
after it's already working.

Since DRBD also uses Linux-HA for the heartbeat, I guess it would be
nicer to use that as well to make EVMS cluster-aware, even if the LVs
are LVM2-compatible.

--
Javier
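PS: for the "turn plain LVM into CLVM" part, the switch is roughly this
on each app server, once the cluster packages are installed and the
nodes have joined (the volume group name is only an example):

    # /etc/lvm/lvm.conf: use the built-in cluster locking (clvmd)
    locking_type = 3

    # start clvmd and mark the volume group as clustered
    /etc/init.d/clvmd start
    vgchange -c y vg0

    # after that, lvcreate/lvextend can be done online from any node
    lvcreate -L 10G -n app2-disk vg0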
Thank you for your thoughts on this subject.

I will try to set up DRBD on my shared storage and EVMS on my app
servers. This way I will keep my setup as clean as possible regarding
patching and maintenance.

- Karsten

Javier Guerra wrote:
> On Thursday 25 May 2006 11:40 am, Karsten Nielsen wrote:
>> Is it important to use a cluster-aware filesystem in my case? I can
>> imagine that the setup may, at some point, have heartbeat implemented
>> between the app servers, so that if a domU on e.g. dom0-1 becomes
>> unresponsive, that domU gets started on dom0-2. But at no point will
>> more than one instance of the same domU be running.
>
> As I see it, it's important to use cluster-aware tools, but not
> necessarily a cluster-aware filesystem. If you can do live migration,
> any heartbeat daemon should be able to trigger the migration.
>
>> Which volume manager is preferable to the other? CLVM implies a
>> locking mechanism? Can EVMS do the same?
>
> There are three options for a volume manager:
>
> plain LVM: works OK on a cluster, but you have to bring it down to do
> any modification to the volume group.
>
> CLVM: the same as LVM, but uses the GFS lock manager to allow online
> administration. The on-disk layout, device-mapper and everything else
> is exactly the same as plain LVM: no extra overhead, nothing. It
> doesn't depend on a running GFS, just the lock manager and fencing.
>
> EVMS: more generic than LVM; the administration utilities handle the
> whole stack: physical devices, partitions, RAID, volume groups,
> logical volumes, even filesystems. Its plugin architecture lets it
> manage md (for RAID), dm (the device mapper, used by LVM), and
> filesystems. It can use the LVM2 disk layout and coexist with LVM2.
> It uses Linux-HA for heartbeat, fencing and locking, becoming fully
> cluster-aware.
>
> Personally, I prefer the LVM approach of doing just one thing (joining
> devices and splitting them into logical volumes) and doing it well,
> but the theory and design of EVMS seem better and cleaner.
>
> Note that to turn plain LVM into CLVM, you have to install the GFS
> packages and join the Xen app servers to a cluster; there is no need
> to create a GFS filesystem, just create the LVs.
>
>> If I want to use DRBD, can I start out with only one storage server,
>> or do I need two storage servers?
>
> I don't have experience with this, but I guess you could add the
> mirror after it's already working.
>
> Since DRBD also uses Linux-HA for the heartbeat, I guess it would be
> nicer to use that as well to make EVMS cluster-aware, even if the LVs
> are LVM2-compatible.
On 5/25/06, Karsten Nielsen <karsten-xen@foo-bar.dk> wrote:
> Hi list,
>
> I have the setup illustrated below. The app (application Xen) servers
> will have live migration between them, and at any given time new app
> servers can be added, as well as more storage on the shared storage
> server. I have exported the block devices from the shared storage with
> nbd.
>
>       ---------    ---------
>       | app 1 |    | app 2 |
>       ---------    ---------
>             \        /
>              \      /
>               \    /
>         ------------------
>         | shared storage |
>         ------------------
>
> I am having a lot of trouble getting GFS to work on the xenified
> kernel 2.6.16, whereas I have no problem patching the xenified kernel
> with the EVMS patches. I haven't tried booting that kernel yet, but it
> compiled fine.

Regarding patching, you might want to go with OCFS2 instead of GFS,
since it is already in kernel 2.6.16.

--
Lars Roland
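PS: roughly, once the nodes are described in /etc/ocfs2/cluster.conf
(the device, label, slot count and mount point below are only examples):

    # format the shared device with one slot per app server
    mkfs.ocfs2 -N 2 -L xenshared /dev/nbd0

    # bring the o2cb cluster stack online and mount on each app server
    /etc/init.d/o2cb online ocfs2cluster
    mount -t ocfs2 /dev/nbd0 /srv/xen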
Lars Roland wrote:
> On 5/25/06, Karsten Nielsen <karsten-xen@foo-bar.dk> wrote:
>> Hi list,
>>
>> I have the setup illustrated below. The app (application Xen) servers
>> will have live migration between them, and at any given time new app
>> servers can be added, as well as more storage on the shared storage
>> server. I have exported the block devices from the shared storage
>> with nbd.
>>
>>       ---------    ---------
>>       | app 1 |    | app 2 |
>>       ---------    ---------
>>             \        /
>>              \      /
>>               \    /
>>         ------------------
>>         | shared storage |
>>         ------------------
>>
>> I am having a lot of trouble getting GFS to work on the xenified
>> kernel 2.6.16, whereas I have no problem patching the xenified kernel
>> with the EVMS patches. I haven't tried booting that kernel yet, but
>> it compiled fine.
>
> Regarding patching, you might want to go with OCFS2 instead of GFS,
> since it is already in kernel 2.6.16.

I have tried that and it worked fine, but the problem is that apt-get
on Debian uses mmap and OCFS2 does not support that. So I am afraid
that this is not an option.

- Karsten
There is an easy solution for that (rpm on Mandriva has the same
problem). Create a file on your OCFS2 filesystem, let's say 500M, set up
a loop device on it and create an ext3 filesystem inside it. Use it as a
second disk in your domU and mount it on /var/lib/rpm, or whatever
Debian uses for apt-get.

Not the best solution, but it works for me with an NFS shared device for
the domUs. NFS doesn't support mmap either.

On 5/25/06, Karsten Nielsen <karsten-xen@foo-bar.dk> wrote:
> Lars Roland wrote:
>> On 5/25/06, Karsten Nielsen <karsten-xen@foo-bar.dk> wrote:
>>> Hi list,
>>>
>>> I have the setup illustrated below. The app (application Xen)
>>> servers will have live migration between them, and at any given time
>>> new app servers can be added, as well as more storage on the shared
>>> storage server. I have exported the block devices from the shared
>>> storage with nbd.
>>>
>>>       ---------    ---------
>>>       | app 1 |    | app 2 |
>>>       ---------    ---------
>>>             \        /
>>>              \      /
>>>               \    /
>>>         ------------------
>>>         | shared storage |
>>>         ------------------
>>>
>>> I am having a lot of trouble getting GFS to work on the xenified
>>> kernel 2.6.16, whereas I have no problem patching the xenified
>>> kernel with the EVMS patches. I haven't tried booting that kernel
>>> yet, but it compiled fine.
>>
>> Regarding patching, you might want to go with OCFS2 instead of GFS,
>> since it is already in kernel 2.6.16.
>
> I have tried that and it worked fine, but the problem is that apt-get
> on Debian uses mmap and OCFS2 does not support that. So I am afraid
> that this is not an option.
>
> - Karsten
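For illustration, the workaround would look roughly like this (file
name, size, device letter and mount point are only examples; check which
directory apt actually mmaps on your system):

    # on the OCFS2 (or NFS) share: create a container file with ext3 in it
    dd if=/dev/zero of=/srv/xen/apt-state.img bs=1M count=500
    mkfs.ext3 -F /srv/xen/apt-state.img

    # hand it to the domU as an extra disk, e.g. in the domU config:
    #   disk = [ ..., 'file:/srv/xen/apt-state.img,sdb1,w' ]
    # and inside the domU mount it over the directory apt mmaps:
    mount /dev/sdb1 /var/cache/apt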
> problem). Create a file on your OCFS2 filesystem, let's say 500M, set up

A lot more software than apt uses mmap().

Live migration doesn't require any of this stuff to work -- use fewer,
simpler components when possible. Simplify and consolidate your storage
first; everything else will follow.

For example, why not use DRBD to mirror (for HA) a storage system and
provision it to your Xen machines via iSCSI?

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
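PS: a bare-bones DRBD resource definition for such a storage pair, just
as a sketch (hostnames, disks and addresses are invented):

    # /etc/drbd.conf (excerpt) on both storage servers
    resource r0 {
      protocol C;
      on store1 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   192.168.1.1:7788;
        meta-disk internal;
      }
      on store2 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }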
I cannot deviate from the setup of a shared storage server, which can
export the block devices via nbd to my app servers.

So what you suggest is that I use DRBD to mirror those nbd devices
between the two app servers? Or how would you do it?

- Karsten

John Madden wrote:
>> problem). Create a file on your OCFS2 filesystem, let's say 500M, set up
>
> A lot more software than apt uses mmap().
>
> Live migration doesn't require any of this stuff to work -- use fewer,
> simpler components when possible. Simplify and consolidate your
> storage first; everything else will follow.
>
> For example, why not use DRBD to mirror (for HA) a storage system and
> provision it to your Xen machines via iSCSI?
>
> John
On Thu, 25-05-2006 at 18:40 +0200, Karsten Nielsen wrote:
> Javier Guerra wrote:
> > On Thursday 25 May 2006 10:55 am, Karsten Nielsen wrote:
> >> I am having a lot of trouble getting GFS to work on the xenified
> >> kernel 2.6.16, whereas I have no problem patching the xenified
> >> kernel with the EVMS patches. I haven't tried booting that kernel
> >> yet, but it compiled fine.
> >
> > The choice isn't between GFS and EVMS, because they are at different
> > levels of any solution. EVMS is a volume manager, like LVM, while
> > GFS is a filesystem (like ext3 or reiser, but cluster-aware).
>
> Is it important to use a cluster-aware filesystem in my case? I can

I think not. You need a cluster system, but not a cluster filesystem.
At any one time the filesystem will be used by just one app server. If
that server dies, heartbeat will have to mount the filesystem on the
other system and migrate the domU.

--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información      _o)
y las Comunicaciones Aplicadas (ATICA)     / \\
http://www.um.es/atica                    _(___V
Tfo: 968367590
Fax: 968398337
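PS: with the old-style Heartbeat configuration that is one line in
/etc/ha.d/haresources, something like the following (node, DRBD
resource, device, mount point and the domU start script are only
placeholders):

    # preferred node, then the resources to fail over as a group:
    # promote the DRBD resource, mount it, then start the guests
    app1 drbddisk::r0 Filesystem::/dev/drbd0::/srv/xen::ext3 xendomains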
On 5/25/06, Karsten Nielsen <karsten@foo-bar.dk> wrote:
>
> I cannot deviate from the setup of a shared storage server, which can
> export the block devices via nbd to my app servers.
>
> So what you suggest is that I use DRBD to mirror those nbd devices
> between the two app servers? Or how would you do it?
>
> - Karsten

I don't think Novell or RH are making recommendations for Xen
clustering yet. For physical machines, IIRC:

RH supports nbd/md to mirror drives between 2 computers on their
enterprise products.

SLES supports drbd, which does the whole job.

I've not heard of trying to mix nbd with drbd, but if you are implying
you have 3 computers (or VMs), an nbd server and 2 clients, it may be
the way to go.

Would you be running the nbd server on dom0 and the clients on domU?

Greg

--
Greg Freemyer
The Norcross Group
Forensics for the 21st Century
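PS: the nbd/md combination mentioned above would look roughly like this
on the node doing the mirroring (device names and the remote export are
made up):

    # attach the block device exported by the second machine
    nbd-client storage2 2001 /dev/nbd1

    # mirror a local partition against the remote one
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda5 /dev/nbd1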
> I cannot deviate from the setup of a shared storage server, which can
> export the block devices via nbd to my app servers.
>
> So what you suggest is that I use DRBD to mirror those nbd devices
> between the two app servers? Or how would you do it?

I suggest you focus on redundancy of your storage system (potentially by
using DRBD) and export block devices however you see fit. The point is
that you don't need cluster filesystems to do Xen domain migration.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu