I'm sure someone asked on this list whether the same kernel can run in
xen0 and xenU, and that the answer was yes, but now I can't find it -
did I imagine it?

Presumably all I need to do is compile the kernel with both the frontend
and backend drivers and Xen will sort the rest out itself, right? What
disadvantages are there to this approach apart from kernel bloat? Are
there any optimisations made in the kernel compile if it is front- or
back-end only?

In the testing phase, especially while the Xen API is changing a lot,
one of the things I find hard is that when I build a new kernel I also
need to recompile iscsi etc. Only having to do this once would make
things easier.

Which brings up another point: how often is Xen changing these days,
such that a xenU domain won't migrate between machines if one is running
today's build and the other yesterday's?

Finally, is anyone working on a shared memory filesystem that can be
exported by xen0 and read by xenU, and that would be available very
early in the boot cycle? I ask because updating modules is one of the
hard things to do in xenU on boot when the xenU filesystem isn't readily
mountable by xen0. My workaround is to put all the xenU modules in the
initrd and copy them out of it on boot into the live filesystem before
they are required. Or at least it will be, once I get it going. Is
anyone interested in a quick howto on that, assuming it works well?

Thanks

James
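A minimal sketch of that copy-out step, assuming a shell-based /linuxrc
in the initrd; the root device name, the paths, and the availability of
cp and depmod inside the initrd are all assumptions, not details from
the thread:

    #!/bin/sh
    # /linuxrc fragment: copy the xenU modules shipped in the initrd
    # into the real root before anything tries to load them. Device
    # and path names here are illustrative assumptions.
    KVER=`uname -r`
    ROOTDEV=/dev/sda1        # the xenU root device; an assumption

    # Mount the real root read-write somewhere temporary.
    mount $ROOTDEV /mnt

    # Copy the module tree out of the initrd into the live filesystem.
    cp -a /lib/modules/$KVER /mnt/lib/modules/

    # Regenerate module dependencies against the copied tree.
    depmod -b /mnt $KVER

    umount /mnt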
> I'm sure someone asked on this list whether the same kernel can run in
> xen0 and xenU, and that the answer was yes, but now I can't find it -
> did I imagine it?
>
> Presumably all I need to do is compile the kernel with both the
> frontend and backend drivers and Xen will sort the rest out itself,
> right? What disadvantages are there to this approach apart from kernel
> bloat? Are there any optimisations made in the kernel compile if it is
> front- or back-end only?

There's no downside to using a xen0 kernel in other domains, apart from
a bit of extra bloat and a slightly longer boot time.

> Which brings up another point: how often is Xen changing these days,
> such that a xenU domain won't migrate between machines if one is
> running today's build and the other yesterday's?

There haven't been any API changes for some time.

> Finally, is anyone working on a shared memory filesystem that can be
> exported by xen0 and read by xenU, and that would be available very
> early in the boot cycle? I ask because updating modules is one of the
> hard things to do in xenU on boot when the xenU filesystem isn't
> readily mountable by xen0.

We've got reasonably well thought out plans for something like this, but
it's a way down the todo list. NFS isn't ideal, but it works well
enough.

> My workaround is to put all the xenU modules in the initrd and copy
> them out of it on boot into the live filesystem before they are
> required. Or at least it will be, once I get it going. Is anyone
> interested in a quick howto on that, assuming it works well?

That's not a bad solution, though I tend to avoid initrds as too much
hassle. Even when I'm using iscsi/gnbd, I tend to do the setup in dom0
and then export the device to the other domain as its rootfs. This works
fine for migration, provided both dom0s have the devices imported.

Ian
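For the NFS fallback Ian mentions, a minimal sketch of sharing a module
tree might look like the following; the export path, host name, and
mount point are assumptions, and the xenU kernel would need its network
and NFS client available that early in boot:

    # dom0: export a directory holding the xenU module trees.
    # /etc/exports entry (path and options are illustrative):
    #   /export/xenu-modules  *(ro,sync,no_root_squash)
    exportfs -ra

    # xenU: mount it early in the boot scripts, before any modules
    # are needed.
    mount -t nfs dom0host:/export/xenu-modules /lib/modules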
> There's no downside to using a xen0 kernel in other domains, apart
> from a bit of extra bloat and a slightly longer boot time.

That being true, is there any particular reason why we have separate
kernels?

> > Which brings up another point: how often is Xen changing these days,
> > such that a xenU domain won't migrate between machines if one is
> > running today's build and the other yesterday's?
>
> There haven't been any API changes for some time.

That's good to know.

> > My workaround is to put all the xenU modules in the initrd and copy
> > them out of it on boot into the live filesystem before they are
> > required. Or at least it will be, once I get it going. Is anyone
> > interested in a quick howto on that, assuming it works well?
>
> That's not a bad solution, though I tend to avoid initrds as too much
> hassle. Even when I'm using iscsi/gnbd, I tend to do the setup in dom0
> and then export the device to the other domain as its rootfs. This
> works fine for migration, provided both dom0s have the devices
> imported.

I'm finding them a bit of a hassle too. Doing it your way would avoid a
lot of the mess, which would be nice! The initrd makes the boot process
work well, but most of it is only needed to bootstrap iscsi.

Do you have any opinion on how best to organise it? Currently I have one
iscsi target (running Linux) and two Xen physical hosts. The target
exports LVM logical volumes, which the xenU domains see as physical
disks with a partition table etc. This works well within the domains,
but accessing them for maintenance outside is a right pain.

How do you import the partitions into dom0 such that they can be
exported into domU? Do you run into problems if multiple physical
machines see the same iscsi disks? What if multiple physical machines
see the same volume group?

Thanks

James
> That being true, is there any particular reason why we have separate
> kernels?

Simply because (with default settings) the xenU kernel is 30% smaller.

> > There haven't been any API changes for some time.
>
> That's good to know.

After the 2.0 release, we'd seek to keep APIs / ABIs stable and just fix
bugs or add minor features that don't tread on anything pre-existing.

> Do you have any opinion on how best to organise it? Currently I have
> one iscsi target (running Linux) and two Xen physical hosts. The
> target exports LVM logical volumes, which the xenU domains see as
> physical disks with a partition table etc. This works well within the
> domains, but accessing them for maintenance outside is a right pain.

In principle, dom0 should be able to export VBDs to itself, and then you
could see the partitions inside. I don't know if this works at the
moment, but it seems doable... Anyone tried this?

> How do you import the partitions into dom0 such that they can be
> exported into domU? Do you run into problems if multiple physical
> machines see the same iscsi disks? What if multiple physical machines
> see the same volume group?

I'd import the iSCSI disks in dom0 and then export each device as if it
were a physical device, i.e. if you've imported it into dom0 as
/dev/foobar, then put 'phy:/dev/foobar,/dev/target_dev,w' in the domain
config file.

This could be automated using a shell script (as for file disks and nbd
disks) if you feel like saving time; then you could just have
'iscsi:host:whatever,/dev/target_dev,w'... There are examples under
/etc/xen/scripts/, but we can help you out.

HTH,

Mark
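A rough skeleton of the kind of wrapper script Mark describes, in the
spirit of the existing /etc/xen/scripts/ examples; the
iscsi_import_target step and the device path layout are placeholders,
since the thread doesn't specify which initiator tooling is in use:

    #!/bin/sh
    # Hypothetical sketch: turn an 'iscsi:host:volume' disk spec into
    # a local device that xend can hand out as phy:. All names here
    # are assumptions, not real Xen or iscsi interfaces.

    SPEC=$1                       # e.g. "host:volume"
    HOST=${SPEC%%:*}
    VOL=${SPEC#*:}

    # Import the target with whatever initiator tools are installed;
    # this is a placeholder for the real commands.
    iscsi_import_target "$HOST" "$VOL"

    # Report the resulting local device for xend to export.
    echo "/dev/iscsi/$HOST/$VOL"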
> > There's no downside to using a xen0 kernel in other domains, apart
> > from a bit of extra bloat and a slightly longer boot time.
>
> That being true, is there any particular reason why we have separate
> kernels?

There was a time when a xen0 kernel wouldn't work if it was run in an
unprivileged domain. It now correctly handles privilege violations and
continues. Some people still like a lean-and-mean stripped-down
kernel...

> Do you have any opinion on how best to organise it? Currently I have
> one iscsi target (running Linux) and two Xen physical hosts. The
> target exports LVM logical volumes, which the xenU domains see as
> physical disks with a partition table etc. This works well within the
> domains, but accessing them for maintenance outside is a right pain.

LVM seems to work well for carving up the disk space, but I've just
switched over to gnbd for exporting it to my client machines. Actually,
all of my clients are also servers, and I run both gnbd clients and
servers on each to allow transparent access to LVM partitions across the
cluster.

I've been meaning to knock up a xend block device script that
auto-imports the devices, optimising the case where the device is local.
I guess I'll have the syntax as gnbd:hostname/device, but there'll need
to be some convention for creating gnbd export names, such as
hostname-device.

> How do you import the partitions into dom0 such that they can be
> exported into domU? Do you run into problems if multiple physical
> machines see the same iscsi disks? What if multiple physical machines
> see the same volume group?

Having multiple machines connect to the same iscsi or gnbd target seems
to work fine. Obviously, you should make sure that the target is only
mounted from one place at a time (unless you're using a cluster file
system like ocfs2).

Ian
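A sketch of the auto-import script Ian has in mind, using the
gnbd:hostname/device syntax and the hostname-device export-name
convention, and short-circuiting the local case; the gnbd_import
invocation and the /dev/gnbd path layout are assumptions and may need
adjusting for the gnbd tools actually installed:

    #!/bin/sh
    # Sketch only: resolve a 'gnbd:hostname/device' spec to a local
    # block device, importing over gnbd only when the device is remote.
    # Assumes the device is given relative to /dev, e.g. "host/vg0/lv1".

    SPEC=$1
    HOST=${SPEC%%/*}              # text before the first '/'
    DEV=${SPEC#*/}                # device path, relative to /dev
    EXPORT="$HOST-`basename $DEV`"   # the hostname-device convention

    if [ "$HOST" = "`hostname`" ]; then
        # Local case: skip gnbd and use the device directly.
        echo "/dev/$DEV"
    else
        # Remote case: import from the server if not already present.
        # gnbd_import -i pulls in that server's exports, which is
        # assumed to include the one named by our convention.
        [ -b "/dev/gnbd/$EXPORT" ] || gnbd_import -i "$HOST"
        echo "/dev/gnbd/$EXPORT"
    fi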