Hey chaps. What's the best way to populate an LVM volume to get it ready
for Xen?

Steps I've taken so far:
* create file-based images using another machine
* create physical volume and volume group
* create logical volume

All checks out so far...

acid:~# lvscan
  ACTIVE            '/dev/xenLabs/etch_lvm' [4.00 GB] inherit
  ACTIVE            '/dev/xenLabs/etch_swap' [128.00 MB] inherit
  ACTIVE            '/dev/xenLabs/blag_lvm0' [2.00 GB] inherit
  ACTIVE            '/dev/xenLabs/blag_swap0' [128.00 MB] inherit

So far so good.
* mount /dev/xenLabs/etch_lvm /mnt/testing
* mkfs.ext3 /dev/xenLabs/etch_lvm
* mkswap /dev/xenLabs/etch_swap
* debootstrap /dev/xenLabs/etch_lvm

OK, still with me? Config file:

cat /etc/xen/etch_lvm
# -*- mode: python; -*-
kernel = "/boot/vmlinuz-2.6.16-xen"
memory = 128
name = "etch_lvm"
vif = [ '' ]
disk = [ 'phy:xenLabs/etch_lvm,sda1,w', 'phy:xenLabs/etch_swap,sda2,w' ]
dhcp = "dhcp"
hostname = "etch_lvm"
root = "/dev/sda1 ro"
extra = "4"
vnc = 'yes'

xm start etch_lvm => backend device not found. Am I missing something?
Perhaps I'm not using dd correctly to fill up the LVM volume, or the
wrong switches for cp? I tried:

dd /path/to/image /path/to/lvm
cp -dpR /path/to/image /path/to/lvm

xen-3.0.4-1

--
John Maclean - 07739 171 531
MSc (DIC)
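For reference: debootstrap expects a suite name and a mount point (not a
raw device), the filesystem has to be made before mounting, and dd needs
explicit if=/of= arguments. A minimal sketch of the usual sequence,
assuming a Debian etch target and the volume names above:

--
mkfs.ext3 /dev/xenLabs/etch_lvm
mkswap /dev/xenLabs/etch_swap
mount /dev/xenLabs/etch_lvm /mnt/testing
debootstrap etch /mnt/testing http://ftp.debian.org/debian
umount /mnt/testing

# alternatively, copy an existing filesystem image block-for-block:
dd if=/path/to/image of=/dev/xenLabs/etch_lvm bs=1M
--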
john maclean wrote:
> Hey chaps. What's the best way to populate an lvm to get it ready for
> xen?
> [...]
> xm start etch_lvm => backend device not found. Am I missing something?

First: unless you have a really compelling need, do *NOT* use LVM inside
of Xen guests. Seriously, you're just adding another layer of management
and configuration problems you don't need. Keep your LVM work in Dom0.
Nico Kadel-Garcia wrote:
> First: unless you have a really compelling need, do *NOT* use LVM
> inside of Xen guests. Seriously, you're just adding another layer of
> management and configuration problems you don't need. Keep your LVM
> work in Dom0.

There's no real difficulty in management; the issue becomes mounting
DomU LVM partitions from within Dom0. I've tried it both ways and like
being able to mount the DomU partitions inside Dom0 easily, but pv
resizing vs. ext3 resizing wasn't a big deal and performance barely
changed at all.

That said, I'd love to have the DomUs detect partition size changes
without reboots of the DomU, or even nicer -- have the LVM handling be
passed through to Dom0 directly instead of handled as emulated
partitions. That is to say, /dev/DomU_VG/logs could be configured to
pass through to Dom0's /dev/main/xen_DomU_logs directly... but I've
never played with the LVM internals, so I'm not sure how likely that is
to ever happen.

--
Michael T. Babcock
http://mikebabcock.ca
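For the common case -- a dom0 LV that holds a guest-side VG -- the
guest's volumes can usually be activated and mounted from Dom0 while the
guest is shut down. A rough sketch, assuming a guest VG named DomU_VG;
never activate a guest's VG while the DomU is still running:

--
# with the domU shut down, rescan so dom0's LVM sees the guest's VG
vgscan
vgchange -ay DomU_VG

mount /dev/DomU_VG/logs /mnt/domu-logs
# ... inspect or back up ...
umount /mnt/domu-logs

# deactivate again before restarting the guest
vgchange -an DomU_VG
--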
On Apr 30, 2007, at 11:29 AM, john maclean wrote:
> Hey chaps. What's the best way to populate an lvm to get it ready
> for xen?
> [...]
> disk = [ 'phy:xenLabs/etch_lvm,sda1,w', 'phy:xenLabs/etch_swap,sda2,w' ]

You might have better luck using the full path to the /dev files here...

disk = [ 'phy:/dev/xenLabs/etch_lvm,xvda1,w', 'phy:/dev/xenLabs/etch_swap,xvda2,w' ]

I switched sdaN to xvdaN, though it shouldn't matter which you use.

I am using LVM on dom0 to manage domU's disks with no problems.

--jason
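If the error persists after that, it is worth confirming in dom0 that
the device nodes named in the config actually exist; a quick check,
using the names above:

--
ls -l /dev/xenLabs/
lvdisplay /dev/xenLabs/etch_lvm
# the path after "phy:" must match one of these device nodes exactly
--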
Just added the full paths to the lvms. No difference made. Should I
also change root = "/dev/sda1 ro" to something like root = "xvda ro"?

On 30/04/07, Jason Dillon <jason@planet57.com> wrote:
> You might have better luck using the full path to the /dev files here...
>
> disk = [ 'phy:/dev/xenLabs/etch_lvm,xvda1,w', 'phy:/dev/xenLabs/etch_swap,xvda2,w' ]
>
> I switched sdaN to xvdaN, though it shouldn't matter which you use.
>
> I am using LVM on dom0 to manage domU's disks with no problems.
>
> --jason

--
John Maclean - 07739 171 531
MSc (DIC)
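The root= argument names the device as the guest sees it, so it has to
match the frontend device in the disk= line. A consistent sketch,
assuming the xvda naming suggested above:

--
disk = [ 'phy:/dev/xenLabs/etch_lvm,xvda1,w',
         'phy:/dev/xenLabs/etch_swap,xvda2,w' ]
root = "/dev/xvda1 ro"
# the guest's /etc/fstab should then refer to /dev/xvda1 and /dev/xvda2 too
--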
> That said, I'd love to have the DomUs detect partition size changes
> without reboots of the DomU

We use LVM inside domU's for that; the SAN provides fixed-size LUNs,
which are transparently mapped through into the domU, using the domU
name as a prefix for the groups. It does have some minor annoyances
when performing live migration (i.e. your source & target dom0 must be
-aware- of the LVM, so don't forget to rescan for LVM groups). Other
than that, I don't see why not to recommend it: as long as your dom0
does not use the LVM, there are no issues with it... and you enable hot
disk space adding :-) (see the sketch below)

> or even nicer -- have the LVM
> handling be passed through to Dom0 directly instead of handled as
> emulated partitions. That is to say, /dev/DomU_VG/logs could be
> configured to pass through to Dom0's /dev/main/xen_DomU_logs directly
> ... but I've never played with the LVM internals so I'm not sure how
> likely that is to ever happen.

You're referring to internal "sharing" of an LVM from a dom0 towards a
domU with all its options, right? That's an interesting concept. I
wonder how it would handle on the security level, though; I think you'd
have to go beyond the mere "passing-through" of block devices, possibly
mixing things and making it too complex to handle.
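Hot adding space works roughly like this: attach a fresh block device to
the running guest from dom0, then grow the guest-side VG onto it. A
sketch with hypothetical volume and domain names:

--
# dom0: hand a new LV to the running guest as xvdb
lvcreate -L 10G -n guest1_extra vg0
xm block-attach guest1 phy:/dev/vg0/guest1_extra xvdb w

# guest: fold the new device into the existing VG and grow a volume
pvcreate /dev/xvdb
vgextend guest1_vg /dev/xvdb
lvextend -L +10G /dev/guest1_vg/data
resize2fs /dev/guest1_vg/data   # online ext3 grow, where the kernel supports it
--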
"Tijl Van den Broeck" <subspawn@gmail.com> writes:>> >> That said, I''d love to be able to have the DomUs detect partition size >> changes without reboots of the DomU > > We do use LVM inside domU''s for that, the SAN provides fixed size > LUN''s, which is transparantly mapped through into the domU using > domU-name as prefix for the groups. It does have some minor annoyances > when performing live migration (ie. your source & target dom0 must be > -aware- of the LVM, so don''t forget the rescan for LVM groups). Other > than that, I don''t see why not recommending it, as long as your dom0 > does not use the LVM, there are no issues with it... and you enable > hot disk space adding :-) > > > >> or even nicer -- have the LVM >> handling be passed through to Dom0 directly instead of handled as >> emulated partitions. That is to say, /dev/DomU_VG/logs could be >> configured to pass through to Dom0''s /dev/main/xen_DomU_logs directly >> ... but I''ve never played with the LVM internals so I''m not sure how >> likely that is to ever happen. > > You''re referring to internal "sharing" of an LVM from a dom0 towards a > domU with all its options, right? That''s an interesting concept. I > wonder how it would handle on security levels though, I think you''d > have to go beyond the mere "passing-through" of block devices, > possibly mixing things making it too complex to handle.If you have enough disks/partitions you can make one VG per domU and share that for dom0 and domU using cluster lvm (clvm) to keep them in sync. Instead of real disks/partitions you can run lvm in dom0, create a LV per domU and run lvm on it again. But then you have to specifically include the first lvm LVs in lvm.conf and run vgscan twice. I think that should work. We use lvm inside LVs here on a HA cluster but the inside lvm is exclusively for the domU. No sharing with dom0. The dom0 lvm is so we can resize and create/destroy LVs on the fly, the inside lvm is because all our servers are setup the same. MfG Goswin _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Goswin von Brederlow wrote:
> If you have enough disks/partitions you can make one VG per domU and
> share that between dom0 and domU, using cluster LVM (clvm) to keep
> them in sync. [...]

Have you tried this? For various reasons, I'm looking at using iscsi or
clvm for having common storage and permitting live migrations, and for
snapshotting the LVMs for backup purposes. (LVM snapshots and rsnapshot,
a combination made in heaven!)

> We use LVM inside LVs here on an HA cluster, but the inside LVM is
> exclusively for the domU. [...]

Unfortunately for me, my Dom0 hardware does not have the CPUs for full
virtualization, so access to backup media has to go through Dom0 or be
pushed over the network. And the network or Xen usage seems to double my
backup time, so I *need* Dom0 to be able to access the hard drive
contents of DomU.
Nico Kadel-Garcia <nkadel@gmail.com> writes:
> Have you tried this? For various reasons, I'm looking at using iscsi
> or clvm for having common storage and permitting live migrations, and
> for snapshotting the LVMs for backup purposes. (LVM snapshots and
> rsnapshot, a combination made in heaven!)

I was thinking about drbd 0.8 active-active with clvm, but drbd 0.8
caused crashes, and drbd 0.7 can't do active-active.

But with iscsi and clvm there should be no problem. clvm is just a
little daemon that basically runs vgscan on all hosts whenever you
change something.

> Unfortunately for me, my Dom0 hardware does not have the CPUs for
> full virtualization [...] so I *need* Dom0 to be able to access the
> hard drive contents of DomU.

You can always create LVs in dom0 and export them as sda1/2/3/... to
domU. You have to reboot domU to resize anyway. That way you can then
snapshot them in dom0, mount the snapshot and run your backup.

MfG
        Goswin.
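The snapshot-and-backup step might look like this -- a sketch, assuming
a guest partition exported from a hypothetical LV vg0/guest1_root; the
snapshot is crash-consistent (like a power cut), so the guest can keep
running:

--
lvcreate -s -L 1G -n guest1_root_snap /dev/vg0/guest1_root

mount -o ro /dev/vg0/guest1_root_snap /mnt/snap
rsync -a /mnt/snap/ /backup/guest1/
umount /mnt/snap

lvremove -f /dev/vg0/guest1_root_snap
--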
Goswin von Brederlow wrote:
> You can always create LVs in dom0 and export them as sda1/2/3/... to
> domU. You have to reboot domU to resize anyway. That way you can then
> snapshot them in dom0, mount the snapshot and run your backup.

I do that now, but I can't do a raw install that way using virt-install
or virt-manager, so doing that first install is a pain in the neck to
get a working image. The jailtime.org images are good, but not so
useful for non-RPM based OSes.
Nico Kadel-Garcia <nkadel@gmail.com> writes:
> I do that now, but I can't do a raw install that way using
> virt-install or virt-manager, so doing that first install is a pain
> in the neck to get a working image. The jailtime.org images are good,
> but not so useful for non-RPM based OSes.

In Debian there is xen-create-image: you can point it at an LVM VG and
it creates its own root and swap volumes there and installs Debian.
After that you just have to add additional volumes and move data as
needed.

Neat tool.

MfG
        Goswin
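A typical invocation, for reference (xen-create-image comes from the
xen-tools package; option spellings vary a little between versions, so
treat this as a sketch):

--
xen-create-image --hostname=guest1 --lvm=vg0 --dist=etch \
    --size=4Gb --swap=128Mb --memory=128Mb --dhcp
--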
Goswin von Brederlow wrote:
> In Debian there is xen-create-image: you can point it at an LVM VG
> and it creates its own root and swap volumes there and installs
> Debian. [...]
>
> Neat tool.

That's precisely what virt-manager and virt-install do. This does not
solve my difficulty of accessing the contents of the LVM partitions
from Dom0 to do backup.
Did you create a partition in your LV? I mean, if you added something
like "phy:/dev/main/xen_DomU_logs,hda,w", did you create a partition
hda1 in the installation process when you installed your distribution,
or did you just copy some data into hda?

What's the distro you've installed?

On 5/6/07, Nico Kadel-Garcia <nkadel@gmail.com> wrote:
> That's precisely what virt-manager and virt-install do. This does not
> solve my difficulty of accessing the contents of the LVM partitions
> from Dom0 to do backup.

--
René Jr Purcell
Chargé de projet, sécurité et systèmes
Techno Centre Logiciels Libres, http://www.tc2l.ca/
Téléphone : (418) 681-2929 #124
Rene Purcell wrote:
> Did you create a partition in your LV? I mean, if you added something
> like "phy:/dev/main/xen_DomU_logs,hda,w", did you create a partition
> hda1 in the installation process when you installed your
> distribution, or did you just copy some data into hda?
>
> What's the distro you've installed?

I'm working with RedHat and CentOS. The installer itself insists on
having a local disk device to load up and install a boot loader, and I
haven't worked out the details of a pitiful excuse for documentation of
the Xen config files to figure out how to gracefully override the use
of pygrub. The result is that I have a disk image, not a partition
image or set of partition images, where an LVM partition called
/dev/XEN/xenguest1 will be seen by the guest domain as /dev/xvda, and
will have internal partitions /dev/xvda1, /dev/xvda2, and so on.

I *want* to be able to snapshot the LVM partitions, mount the
snapshots, and run backup operations against those on Dom0 instead of
paying the overhead of running them from DomU.
I'm not sure I understand where your snapshots are... but here's how to
mount an "internal" partition contained in your LV. If you shut down
your DomU and then mount the partition, you'll be able to access your
data in your Dom0.

- hda (physical disk)
    hda1 ( / Dom0 )
    hda2 ( swap Dom0 )
    hda3 (partition, system id=8e LVM)
        VG1
            LV1  DomU (hda, virtual XEN disk)
                hda1 ext3 /
                hda2 swap

There's a partition table in /dev/vg1/lv1. With this command you can
see the partitions contained in your LV:

-----------------------------------------------------------------------------
fdisk -l -u /dev/vg1/lv1

(Don't forget the "-u", it's important: it shows the sizes in sectors.)

Disk /dev/vg1/lv1: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes

        Device Boot      Start         End      Blocks   Id  System
/dev/vg1/lv1p1              63      899639      449788+  82  Linux swap / Solaris
/dev/vg1/lv1p2   *      899640     8385929     3743145   83  Linux
-----------------------------------------------------------------------------

Now if you want to mount the second partition in your LV, type:

mount -o loop,offset=460615680 /dev/vg1/lv1 /mnt

460615680 is equal to 899640 * 512 (512 is the size of each sector and
899640 the starting sector of the partition; you can get both numbers
from the fdisk output above).

Thanks to Jean-Francois Saucier ;) for the recipe! And I hope this can
help you.

@+

On 5/8/07, Nico Kadel-Garcia <nkadel@gmail.com> wrote:
> I *want* to be able to snapshot the LVM partitions, mount the
> snapshots, and run backup operations against those on Dom0 instead of
> paying the overhead of running them from DomU.

--
René Jr Purcell
Chargé de projet, sécurité et systèmes
Techno Centre Logiciels Libres, http://www.tc2l.ca/
Téléphone : (418) 681-2929 #124
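An alternative to the offset arithmetic, assuming the multipath-tools
package is installed: kpartx can create device-mapper nodes for each
partition inside the LV (the exact /dev/mapper names vary slightly by
version):

--
kpartx -av /dev/vg1/lv1     # adds mappings such as /dev/mapper/vg1-lv1p2
mount /dev/mapper/vg1-lv1p2 /mnt
# ... read the data ...
umount /mnt
kpartx -d /dev/vg1/lv1      # remove the mappings again
--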
On 4/30/07, john maclean <jayeola@gmail.com> wrote:
> Hey chaps. What's the best way to populate an lvm to get it ready for xen?

If you can live with not having the root partition of the guest as an
LVM volume, then it is easy to get full LVM in the guest. Just install
the guest as usual, then do something like this in dom0:
--
dd if=/dev/zero of=/var/xen/domains/guest/sdd.img bs=1M count=5120
--

Add the disk image to the guest's Xen config file:
--
disk = [ ..., 'file:/var/xen/domains/guest/sdd.img,sdd,w' ]
--

Shut down and start the guest, so it uses the new disk image:
--
xm shutdown guest
xm create guest
--

Finally, create the volume group "vg" and logical volume "data" by
executing the following in the guest:
--
cfdisk /dev/sdd   # create sdd1 and set its type to "Linux LVM"
reboot
pvcreate /dev/sdd1
vgcreate vg /dev/sdd1
lvcreate -L 1G -n data vg
mkfs.xfs /dev/vg/data
mount /dev/mapper/vg-data /mnt/data
--

That should be it.

--
Lars Roland
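Inside the guest, the standard LVM reporting tools should then confirm
the new stack:

--
pvs   # should list /dev/sdd1 in VG "vg"
vgs   # "vg", roughly 5G total
lvs   # "data", 1G
df -h /mnt/data
--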