I've been using LVM under CentOS to create the backing store for domUs, and although the performance seems acceptable it has some shortcomings. The biggest of these is the LVM bug which prevents me from removing an LV (it says it is still mounted and it definitely isn't). I thought this was just a CentOS bug, but it appears to be evident in Debian and Ubuntu too, and I really can't afford a reboot of the dom0 every time I want to remove a logical volume that was once accessed but now isn't!

What are people's experiences with using qcow or qcow2 images instead of LVM volumes? The reason I chose LVM was for its ease of management and the relative ease with which you can shrink and enlarge a domU filesystem. Can you do this with qcow? How does it perform on a dom0 with many running domUs (i.e. dozens)?

Thanks,

Matt.
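For reference, the resize workflow Matt is alluding to is roughly the following under LVM (a sketch; the VG/LV names are made up, and the domU must be shut down, or the filesystem otherwise unmounted, before resizing):

# Grow a domU's root LV by 5 GB, then grow the ext3 filesystem into it
lvextend -L +5G /dev/VolGroupVM/myvm
e2fsck -f /dev/VolGroupVM/myvm
resize2fs /dev/VolGroupVM/myvm

# Shrinking works in the opposite order: filesystem first, then the LV
e2fsck -f /dev/VolGroupVM/myvm
resize2fs /dev/VolGroupVM/myvm 8G
lvreduce -L 8G /dev/VolGroupVM/myvm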
Thiago Camargo Martins Cordeiro
2009-Dec-21 13:04 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
You can "ummount/free" the "not mounted"/busy LVMs LVs with dmsetup! ;-) Anyway, I agree this is sucks! Are you using kpartx to map the LVs partitioned partitions to your dom0? If yes, try to not partition (with fdisk) a LV volume... Just use the LVs within your domUs as it is... Cheers! Thiago 2009/12/21 Matthew Law <matt@webcontracts.co.uk>> I''ve been using lvm under centos to create the backing store for domUs and > although the performance seems acceptable it has some shortcomings. The > biggest of which is the LVM bug which prevents me from removing an lv (it > says it is still mounted and it definitely isnt). I thought this was just > a centos bug but it appears to be evident in debian and ubuntu too and I > really can''t afford a reboot of the dom0 everytime I want to remove a > logical volume that was once accessed but now isn''t! > > What are people''s experiences with using qcow or qcow2 images over LVM > volumes? The reason I chose LVM was for it''s ease of management and > relative ease with which you can shrink and enlarge a domU filesystem. > Can you do this with qcow? How does it perform on a dom0 with many > running domUs (ie. dozens) ? > > > Thanks, > > Matt. > > > _______________________________________________ > Xen-users mailing list > Xen-users@lists.xensource.com > http://lists.xensource.com/xen-users >_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Florian Gleixner
2009-Dec-21 13:26 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
Matthew Law wrote:> although the performance seems acceptable it has some shortcomings. The > biggest of which is the LVM bug which prevents me from removing an lv (it > says it is still mounted and it definitely isnt). I thought this was justI''ve read somewhere that this is a bug in udev scripts. Stopping or restarting udev should do the trick. I''ve had the problem once ago and rebooted. So i cannot say if this really works. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Yes, I've tried the dmsetup solution and that doesn't seem to work for me, and I couldn't get PVGrub to work unless I used kpartx. Maybe that's my mistake, but it is what I ended up using to get everything working.

I see so many potential filesystem issues with Linux that I am seriously considering moving over to OpenSolaris, despite it not being considered production ready yet. ZFS brings a lot of features for a small RAM overhead and some CPU usage, but given that it does what we currently do with expensive RAID adapters, it might be worth it. The other thing with OpenSolaris is that they seem to be actively developing Xen/xVM at a time when most of the Linux distros seem to be dropping it in favour of KVM. I don't want KVM. I want Xen :-)

Regards,

Matt.

On Mon, December 21, 2009 1:04 pm, Thiago Camargo Martins Cordeiro wrote:
> Are you using kpartx to map the partitions of partitioned LVs into your
> dom0? If yes, try not partitioning the LV (with fdisk) at all... Just use
> the LVs within your domUs as they are...

Matthew Law
Director, Penguin Consulting Services Limited
matt@webcontracts.co.uk
www.webcontracts.co.uk
Thanks for the suggestion. I've seen lots of examples of this on Google and it would seem to be a known and as yet unaddressed bug. The kernel seems to be incorrectly tracking the number of LVs which are currently open.

I can reproduce it on CentOS 5.4 by creating a new LV, adding a disklabel, partitioning it as linux-swap and then running mkswap on it. After that, even though it has never been mounted, it cannot be removed without first rebooting the dom0. dmsetup claims it is open by one thing, but lsof, fuser, mount, etc. do not show it. Restarting udevd has no effect on this :-(

Now I am considering two options:

1) If this issue is confined to LVs used for swap disks, perhaps I can switch to using disk files for domU swap space..?

2) If this isn't confined to LVs used for swap, then perhaps I could have an OpenSolaris domU export zvols across NFS or iSCSI back to the dom0 and use these for each domU system and swap disk? - this sounds a little crazy to me, and performance and load might be unacceptable too.

Are there any other options available?

Thanks,

Matt.

On Mon, December 21, 2009 1:26 pm, Florian Gleixner wrote:
> I've read somewhere that this is a bug in the udev scripts. Stopping or
> restarting udev should do the trick. I had the problem a while back and
> rebooted, so I can't say whether this really works.
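Spelled out as commands, the reproduction Matt describes is roughly the following (a sketch assembled from the commands he posts later in this thread; VG and LV names are made up):

lvcreate -L 512M -n repro-swap VolGroupVM
parted /dev/VolGroupVM/repro-swap mklabel msdos
parted /dev/VolGroupVM/repro-swap mkpartfs primary linux-swap 0 512
mkswap /dev/mapper/VolGroupVM-repro--swapp1

# The LV has never been mounted or swapon'd, yet:
lvremove /dev/VolGroupVM/repro-swap   # fails, claiming the LV is in use
dmsetup info -c | grep repro          # shows an open count of 1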
Grant McWilliams
2009-Dec-23 16:56 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Mon, Dec 21, 2009 at 4:54 AM, Matthew Law <matt@webcontracts.co.uk> wrote:
> What are people's experiences with using qcow or qcow2 images instead of
> LVM volumes?

I'm going to chime in on the Qcow2 portion of your question - it doesn't work. Depending on which version of Xen you have you will hit different bugs, and even if you can get some form of Qcow to work it doesn't support a backing store, so it's basically worthless. I'm considering moving 40 VMs to KVM just because I ran into a thousand brick walls with Qcow and Xen. I want to use Xen, but at the end of the day I just need to get my work done.

If you're looking to do COW you can use dmsetup and snapshots to layer a writable copy-on-write device on top of a read-only base disk, but you'll just run into the same LV bug that you've been battling. It's sad that we don't have real COW support in a hypervisor as powerful as Xen. Apparently it's just not that important.

I wanted to have an image in a ramdisk and set up a snapshot of that as read-only, with the writes going to disk, for performance reasons, but if you try to boot a DomU off an image in RAM it will screw up your machine and only a reboot will fix it. You can't even start ANY other domains until the reboot. Or at least I couldn't find a way around it.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.
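For reference, the dmsetup layering Grant mentions looks roughly like this with the generic device-mapper snapshot target (a sketch; the LV paths and the 8-sector chunk size are made up for illustration):

# Size of the read-only base device, in 512-byte sectors
SIZE=$(blockdev --getsz /dev/VolGroupVM/base)

# Layer a persistent copy-on-write device over the read-only base;
# writes land in the 'cow' LV and the base is never touched
echo "0 $SIZE snapshot /dev/VolGroupVM/base /dev/VolGroupVM/cow P 8" | \
    dmsetup create myvm-cow

# /dev/mapper/myvm-cow can now be handed to a domU as its disk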
On Wed, December 23, 2009 4:56 pm, Grant McWilliams wrote:
> I'm going to chime in on the Qcow2 portion of your question - it doesn't
> work. [...] I'm considering moving 40 VMs to KVM just because I ran into
> a thousand brick walls with Qcow and Xen.

What does KVM buy you? Does it have a working COW backing store?

I did some more tests last night and it isn't just the swap LVs that are affected - it's all of them. So I can create, grow or shrink an LV, but not remove it without a reboot. That is a huge limitation and has really screwed me up.

My next plan is to use OpenSolaris or Nexenta and export zvols over iSCSI. That brings with it a bunch of unknowns along with extra cost and complexity. It should offer better snapshots than LVM, thin provisioning, deduplication and better data protection, however, so that's why I'm going to give it a try before I am forced to abandon Xen entirely.

Matt
Grant McWilliams
2009-Dec-24 09:58 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Thu, Dec 24, 2009 at 1:32 AM, Matthew Law <matt@webcontracts.co.uk> wrote:
> What does KVM buy you? Does it have a working COW backing store?

KVM supports Qcow2 natively. It may be possible to use Qcow2 with a Xen HVM DomU, but I've never tried it. Qcow is a part of Qemu and Xen HVM uses a version of Qemu. KVM practically IS Qemu with acceleration.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.
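The backing-store workflow Grant is referring to looks like this with qemu-img under KVM (a sketch; the filenames are made up):

# Create a base image once and install the template OS into it
qemu-img create -f qcow2 base.qcow2 10G

# Each VM then gets a thin overlay that records only its own changes;
# the base image is opened read-only and shared by all overlays
qemu-img create -f qcow2 -b base.qcow2 vm1.qcow2

# Inspect an image and its backing file
qemu-img info vm1.qcow2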
Fajar A. Nugraha
2009-Dec-24 12:51 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Wed, Dec 23, 2009 at 8:31 PM, Matthew Law <matt@webcontracts.co.uk> wrote:
> Now I am considering two options:
>
> 1) If this issue is confined to LVs used for swap disks, perhaps I can
> switch to using disk files for domU swap space..?

That ... depends. Generally, performance-wise, files will not be as good as a block device (partition, LVM, etc.). That being said, if you correctly predict domU resource assignment so that swapping rarely occurs (it kills performance anyway), and it's just a safety net against OOM, you probably won't notice the performance difference.

> 2) If this isn't confined to LVs used for swap, then perhaps I could have
> an OpenSolaris domU export zvols across NFS or iSCSI back to the dom0 and
> use these for each domU system and swap disk?

That would work. Sun even sells an iSCSI SAN server based on ZFS, called Sun Unified Storage Systems. Note however that in my tests, even on the same server, zfs + zvol performance is lower compared to LVM. Add to that iSCSI and network overhead. Whether or not it's acceptable depends on your requirements, so it's best to try it yourself.

> Are there any other options available?

If you don't care about space saving (I seem to recall you mentioned snapshots in another thread), you can simply use the "disk" directly on the domU as swap. That is, you assign two disks to the domU, one of them for the filesystem, the other as swap. Don't label the swap disk, don't create partitions, just run mkswap on it directly. In my case I assign it directly as a partition (hda1/sda1/xvda1, xvda2, etc.), but you can assign it as a disk so it works better with GUI tools (virt-install/virt-manager).

Another thing to note if you use LVM snapshots: if you somehow let a snapshot fill to 100%, you might lose data. That's why I only use snapshots for temporary purposes. It might not be a problem if you can guarantee that it will always stay below 100% (perhaps with some monitoring/alert system), but IMHO it's not worth it.

-- 
Fajar
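A minimal sketch of the partitionless layout Fajar describes (VG, LV and domU names are made up):

# One LV for the root filesystem, one for swap; no disklabel on either
lvcreate -L 10G -n myvm-root VolGroupVM
lvcreate -L 512M -n myvm-swap VolGroupVM
mkfs.ext3 /dev/VolGroupVM/myvm-root
mkswap /dev/VolGroupVM/myvm-swap

Then in the domU config file, hand each LV over as a partition:

disk = [
    'phy:/dev/VolGroupVM/myvm-root,xvda1,w',
    'phy:/dev/VolGroupVM/myvm-swap,xvda2,w',
]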
On Thu, December 24, 2009 12:51 pm, Fajar A. Nugraha wrote:
> That ... depends. Generally, performance-wise, files will not be as good
> as a block device (partition, LVM, etc.).

Thanks. I was hoping that the lvremove bug was only evident on the LVs formatted as swap (my thinking was that perhaps the kernel was keeping some internal pointer to them once 'mkswap' had been run), but it occurs for me on all LVs. I could have tolerated a file for swap, since for most domUs it wouldn't and shouldn't get much use.

> That would work. Sun even sells an iSCSI SAN server based on ZFS, called
> Sun Unified Storage Systems.

There is a serious issue with iSCSI performance on OpenSolaris which, if I understand it properly, is down to the way that ZFS + COMSTAR must commit every write (i.e. it has to be synchronous) for NFS and iSCSI clients. It's a serious hit, and the workarounds don't sound attractive either. You wouldn't see this if the dom0 itself was on OpenSolaris - I'm testing that now (albeit with SXCE build 129 rather than OpenSolaris proper).

> Another thing to note if you use LVM snapshots: if you somehow let a
> snapshot fill to 100%, you might lose data.

I do care about space saving but would trade a little for flexibility and performance. The dom0 in question has 8 disks on an Adaptec 5805Z controller, split into a 2-disk RAID1 volume for the OS and a 6-disk RAID6 volume for domU storage and VM images. I don't want to use individual disks or create scads of partitions for each domU.

I use snapshots to take backups, but not as a means of thin-provisioning domUs. IMHO LVM snapshots aren't up to that job.

I don't use any of the GUI tools. I have a bunch of shell scripts to provision domUs by creating an LV, formatting it, mounting it and untarring the template image into it. Could it be this that is causing the problem? Should I switch to some other method?

Thanks,

Matt.
Fajar A. Nugraha
2009-Dec-24 14:24 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Thu, Dec 24, 2009 at 9:04 PM, Matthew Law <matt@webcontracts.co.uk> wrote:
> I do care about space saving but would trade a little for flexibility and
> performance.

Good to hear that. So at least we can rule out LVM snapshot problems, for now.

> The dom0 in question has 8 disks on an Adaptec 5805Z controller, split
> into a 2-disk RAID1 volume for the OS and a 6-disk RAID6 volume for domU
> storage and VM images.

I use LVM for domU disks as well. I have never had the problem you mentioned.

> I have a bunch of shell scripts to provision domUs by creating an LV,
> formatting it, mounting it and untarring the template image into it.

That's basically what I do. I use RHEL 5.4, some hosts with the built-in Xen, others with Gitco's Xen rpm.

You mentioned disklabels and partitions. Does that mean you use kpartx? Did you remember to delete the mappings later using "kpartx -d"?

I seriously suspect your problem is related to kpartx. Try changing your setup a little bit so that it maps LVs as partitions instead of disks. Something like this in your domU config file:

disk = [
    'phy:/dev/vg/rootlv,xvda1,w',
    'phy:/dev/vg/swaplv,xvda2,w',
]

You could use sda1/hda1 instead of xvda1 if your existing domUs already use that. There shouldn't be any change necessary to your domU tar image (including fstab or the initramfs), as long as you don't use LVM on the domU side as well.

A successful series of lvcreate - mkfs - mount - untar - unmount - xm create - xm destroy - lvremove should at least narrow down your problem.

-- 
Fajar
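That end-to-end test cycle, spelled out as a sketch (the LV name and the testvm.cfg config file are made up; the tarball path is taken from Matt's own commands in the next message):

lvcreate -L 10G -n testvm VolGroupVM
mkfs.ext3 /dev/VolGroupVM/testvm
mount /dev/VolGroupVM/testvm /mnt/vminstall
tar -xzf /home/vmimages/image.tar.gz -C /mnt/vminstall
umount /mnt/vminstall
xm create testvm.cfg
xm destroy testvm
lvremove /dev/VolGroupVM/testvm   # should succeed without a reboot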
Thanks for the reply - interesting that you don't have the same problem as I do. Maybe it is an error or omission in my scripts?

Here are the commands that are run to provision a domU called 'myvm' with 10GB of disk and 512MB of swap (the dom0 is CentOS 5.4 with Xen 3.4.1 from the Gitco repositories):

# domU backing store
lvcreate -C y -L 10G -n myvm VolGroupVM
parted /dev/VolGroupVM/myvm mklabel msdos
parted /dev/VolGroupVM/myvm mkpartfs primary ext2 0 1024000
kpartx -p "" -av /dev/VolGroupVM/myvm
tune2fs -j /dev/mapper/myvm1
kpartx -d /dev/VolGroupVM/myvm

# domU swap
lvcreate -C y -L 512 -n myvm-swap VolGroupVM
parted /dev/VolGroupVM/myvm-swap mklabel msdos
parted /dev/VolGroupVM/myvm-swap mkpartfs primary linux-swap 0 512
mkswap /dev/mapper/VolGroupVM-myvm--swapp1
lvscan | sort | grep ${NAME}

Then it is basically a case of (from memory):

mount -t ext3 /dev/mapper/myvm1 /mnt/vminstall
tar -xzf /home/vmimages/image.tar.gz -C /mnt/vminstall
umount /mnt/vminstall

And the domU config file looks like this:

# -*- mode: python; -*-
name = "myvm"
maxmem = 512
memory = 512
vcpus = 1
disk = [ "phy:/dev/VolGroupVM/myvm,xvda,w",
         "phy:/dev/VolGroupVM/myvm-swap,xvdb,w" ]
vif = [ "mac=ae:00:59:15:1a:0b,ip=192.168.1.11,vifname=vifmyvm0" ]
kernel = "/usr/lib/xen/boot/pv-grub-x86_32.gz"
extra = "(hd0,0)/boot/grub/menu.lst"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"

I have cobbled together these commands based on what I have read in various places. Have I screwed up somewhere?

Many thanks,

Matt.

On Thu, December 24, 2009 2:24 pm, Fajar A. Nugraha wrote:
> I seriously suspect your problem is related to kpartx. Try changing your
> setup a little bit so that it maps LVs as partitions instead of disks.
Fajar A. Nugraha
2009-Dec-24 21:58 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Thu, Dec 24, 2009 at 11:56 PM, Matthew Law <matt@webcontracts.co.uk> wrote:
> # domU backing store
> lvcreate -C y -L 10G -n myvm VolGroupVM
> parted /dev/VolGroupVM/myvm mklabel msdos
> parted /dev/VolGroupVM/myvm mkpartfs primary ext2 0 1024000
> kpartx -p "" -av /dev/VolGroupVM/myvm
> tune2fs -j /dev/mapper/myvm1
> kpartx -d /dev/VolGroupVM/myvm

Hmm ... the kpartx -d part looks OK. I use something like:

lvcreate
mkfs.ext3

with no parted/kpartx involved, since the filesystem and swap live directly on the LV.

> # domU swap
> lvcreate -C y -L 512 -n myvm-swap VolGroupVM
> parted /dev/VolGroupVM/myvm-swap mklabel msdos
> parted /dev/VolGroupVM/myvm-swap mkpartfs primary linux-swap 0 512
> mkswap /dev/mapper/VolGroupVM-myvm--swapp1
> lvscan | sort | grep ${NAME}

What, no "kpartx" here? Not even "kpartx -a"? How did you get /dev/mapper/VolGroupVM-myvm--swapp1 - was it automatically created by parted, without the need to run kpartx manually?

> Then it is basically a case of (from memory):
>
> mount -t ext3 /dev/mapper/myvm1 /mnt/vminstall
> tar -xzf /home/vmimages/image.tar.gz -C /mnt/vminstall
> umount /mnt/vminstall

Looks good.

> I have cobbled together these commands based on what I have read in
> various places. Have I screwed up somewhere?

I'd say the biggest difference from my setup is that you use partitions with parted and kpartx. I haven't tested that, but it might be your source of problems. Try the partitionless setup that I mentioned earlier to narrow down the source of the problem.

-- 
Fajar
I've just done some tests on this and you are absolutely right: with no partitions it works fine, so parted, kpartx or something related is causing the problem. Now my problem is that I can't run a partitionless volume, because I absolutely need to use pvgrub and that doesn't seem to work without a proper partition table in place - or have I screwed up somewhere?

Many thanks,

Matt.

On Thu, December 24, 2009 9:58 pm, Fajar A. Nugraha wrote:
> I'd say the biggest difference from my setup is that you use partitions
> with parted and kpartx. I haven't tested that, but it might be your
> source of problems. Try the partitionless setup that I mentioned earlier
> to narrow down the source of the problem.
Fajar A. Nugraha
2009-Dec-29 22:04 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Tue, Dec 29, 2009 at 11:56 PM, Matthew Law <matt@webcontracts.co.uk> wrote:
> I've just done some tests on this and you are absolutely right: with no
> partitions it works fine, so parted, kpartx or something related is
> causing the problem. Now my problem is that I can't run a partitionless
> volume, because I absolutely need to use pvgrub and that doesn't seem to
> work

Yes, that seems to be the case (at least with 3.4.1).

> without a proper partition table in place - or have I screwed up
> somewhere?

You might be able to compile xen-unstable and take its pvgrub. Should work.
http://old.nabble.com/-xen-unstable--pvgrub:-Allow-to-work-with-a-partitionless-virtual-disc.-p19609135.html

Another way is to investigate why your earlier setup has problems. To eliminate partition problems, you can map the disk to dom0 like this:

modprobe xenblk
xm block-attach 0 phy:/dev/vg_name/lv_name xvda w
### do your stuff here: fdisk xvda, mkfs, tar, whatever.
### use fdisk instead of parted, and don't forget to umount afterwards
xm block-list 0
xm block-detach 0 51712   # 51712 is the devid for xvda

If that works, then it's 100% confirmed that the problem is with parted/kpartx. Repeat the test, but this time using parted instead of fdisk, and you get the idea :D

-- 
Fajar
On Tue, December 29, 2009 10:04 pm, Fajar A. Nugraha wrote:
> Another way is to investigate why your earlier setup has problems. To
> eliminate partition problems, you can map the disk to dom0 like this:
>
> modprobe xenblk
> xm block-attach 0 phy:/dev/vg_name/lv_name xvda w
> [...]
> If that works, then it's 100% confirmed that the problem is with
> parted/kpartx.

Thanks, Fajar! Using this method I could create a single partition on the LV with fdisk, format it as ext3, mount it, untar a VM image onto it and boot the VM with pvgrub as before. I then xm destroyed the domU and removed the LV with no problems - result!

After this I set about trying to find which of the previous operations was holding the LV in the open state, so I started again with a clean LV and incrementally performed each operation on it, trying to remove the LV after each one. The error occurs after running:

parted /dev/VolGroupVM/testvm mkpartfs primary ext2 0 10240

So, parted is the culprit (or at least the first thing to cause the problem). Is there perhaps another, scriptable way to create the partitions on the LV?

Matt.
I found this thread:

http://lists.alioth.debian.org/pipermail/parted-devel/2007-June/001798.html

I am hoping that a more recent version of GNU parted might fix it. My dom0 has parted-1.8.1-23.el5.

Cheers,

Matt.
Fajar A. Nugraha
2009-Dec-30 03:01 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Wed, Dec 30, 2009 at 6:26 AM, Matthew Law <matt@webcontracts.co.uk> wrote:
> So, parted is the culprit (or at least the first thing to cause the
> problem). Is there perhaps another, scriptable way to create the
> partitions on the LV?

I usually use fdisk :D Not scriptable as such, but it has worked great. You might also use sfdisk (available by default), or (like you said) upgrade parted to the latest version.

-- 
Fajar
Jorge Armando Medina
2009-Dec-30 04:08 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
Matthew Law wrote:> On Tue, December 29, 2009 10:04 pm, Fajar A. Nugraha wrote: > >> Another way to is to investigate why your earlier setup has problems. >> To eliminate partition problems, you can map the disk to dom0 like >> this: >> >> modprobe xenblk >> xm block-attach 0 phy:/dev/vg_name/lv_name xvda w >> ### do your stuff here. fdisk xvda, mkfs, ta, whatever. Use fdisk >> instead of parted. >> ### don''t forget to umount afterwards >> xm block-list 0 >> xm block-detach 0 51712 <== 51712 is the devid for xvda >> >> If that works, then it''s 100% confirmed the problem is with >> parted/kpartx. Repeat the test, but this time using parted instead of >> fdisk, and you get the idea :D >> > > Thanks, Fajar! Using this method I could create a single partition on the > LV with fdisk, format it as ext3, mount it and untar a vm image on it and > boot the vm with pvgrub as before. I then xm destroyed the domU and > removed the LV with no problems - result! > > After this I set about trying to find which of the previous operations was > holding the LV in the open state, so I started again with a clean lv and > incrementally performed each operation on it and tried to remove it. The > error occurs after running: > > parted /dev/VolGroupVM/testvm mkpartfs primary ext2 0 10240 > > So, parted is the culprit (or at least the first one to cause the > problem). Is there perhaps another, scriptable way to create the > partitions on the LV? >You can use fdisk and play with unix standar input, for example: echo "d n p 1 +10000M n p 2 a 1 w " | fdisk /dev/sdx :) consider the blank lines to accept fdisk defaults for fist and last cilinder. of course you can use a onliner: printf ''n\np\n1\n\n+100M\nt\nfd\na\nn\np\n2\n\n\nt\n2\nfd\np\nw\n'' | fdisk /dev/sdx fdisk /dev/sda Best regards.> Matt. > > > _______________________________________________ > Xen-users mailing list > Xen-users@lists.xensource.com > http://lists.xensource.com/xen-users >_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
On Wed, December 30, 2009 4:08 am, Jorge Armando Medina wrote:
> You can use fdisk and play with Unix standard input [...]

Seems like getting a newer version of parted on CentOS is not so straightforward - it has dependencies on a few libs, and although they are all installed, they are way too old for the version of parted with the fix in it. This method, or perhaps an expect script to do the same, might be the best option in the short term.

Thanks,

Matt.
Fajar A. Nugraha
2009-Dec-30 10:39 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Wed, Dec 30, 2009 at 4:52 PM, Matthew Law <matt@webcontracts.co.uk> wrote:
> Seems like getting a newer version of parted on CentOS is not so
> straightforward - it has dependencies on a few libs, and although they
> are all installed, they are way too old for the version of parted with
> the fix in it.

Why not use sfdisk? Here's an example:

# cat partlist.txt
1,,L,*

# lvcreate -L 10G -n testpartlv rootvg
  Logical volume "testpartlv" created

# sfdisk /dev/rootvg/testpartlv < partlist.txt
Checking that no-one is using this disk right now ...
BLKRRPART: Invalid argument
OK

Disk /dev/rootvg/testpartlv: 1305 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/rootvg/testpartlv: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

                  Device Boot Start   End  #cyls   #blocks  Id  System
/dev/rootvg/testpartlv1    *      1  1304   1304  10474380  83  Linux
/dev/rootvg/testpartlv2           0     -      0         0   0  Empty
/dev/rootvg/testpartlv3           0     -      0         0   0  Empty
/dev/rootvg/testpartlv4           0     -      0         0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...
BLKRRPART: Invalid argument

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

# lvremove /dev/rootvg/testpartlv
Do you really want to remove active logical volume testpartlv? [y/n]: y
  Logical volume "testpartlv" successfully removed

-- 
Fajar
Thanks. I've got this working now. One last issue remains: it would appear that this technique, although it works, is screwing up quotas. I get this error reported:

Cannot stat() mounted device /dev/xvda1: No such file or directory

I definitely have everything needed for quotas in the domU. I think this is because the quota system expects /dev/xvda1 to exist, and it doesn't because I am mounting the whole disk. Have you come across this before?

Many thanks,

Matt

On Wed, December 30, 2009 10:39 am, Fajar A. Nugraha wrote:
> Why not use sfdisk? Here's an example:
>
> # cat partlist.txt
> 1,,L,*
>
> # sfdisk /dev/rootvg/testpartlv < partlist.txt
> [...]
Fajar A. Nugraha
2010-Jan-04 12:37 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Mon, Jan 4, 2010 at 5:20 PM, Matthew Law <matt@webcontracts.co.uk> wrote:
> Cannot stat() mounted device /dev/xvda1: No such file or directory
>
> I definitely have everything needed for quotas in the domU. I think this
> is because the quota system expects /dev/xvda1 to exist, and it doesn't
> because I am mounting the whole disk. Have you come across this before?

udev should create the partition block device file (i.e. /dev/xvda1) as necessary. Try "fdisk -l" and "ls -la /dev/xvda1".

If that doesn't work, you could always try mapping it as hda/sda instead of xvda. That should work with the 2.6.18 kernel.

-- 
Fajar
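In config terms, the fallback Fajar suggests amounts to something like this, following his earlier partition-style advice (a sketch; the LV names are made up):

disk = [
    'phy:/dev/VolGroupVM/testvm,sda1,w',
    'phy:/dev/VolGroupVM/testvm-swap,sda2,w',
]

The domU then sees an ordinary /dev/sda1 device node matching its fstab, which is what the quota tools were failing to stat().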
On Mon, January 4, 2010 12:37 pm, Fajar A. Nugraha wrote:
> udev should create the partition block device file (i.e. /dev/xvda1) as
> necessary. Try "fdisk -l" and "ls -la /dev/xvda1".

fdisk -l

Disk /dev/xvda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/xvda doesn't contain a valid partition table

Here are the exact commands I ran to create and partition the LV, based on your previous suggestions:

lvcreate -C y -L 40G -n testvm VolGroupVM
echo \"1,,L,*\" | sfdisk /dev/VolGroupVM/testvm
mkfs.ext3 /dev/mapper/VolGroupVM-testvm

Have I missed something?

Thanks again,

Matt.
Fajar A. Nugraha
2010-Jan-04 13:20 UTC
Re: [Xen-users] Questions on qcow, qcow2 versus LVM
On Mon, Jan 4, 2010 at 8:08 PM, Matthew Law <matt@webcontracts.co.uk> wrote:
> Here are the exact commands I ran to create and partition the LV, based
> on your previous suggestions:
>
> lvcreate -C y -L 40G -n testvm VolGroupVM
> echo \"1,,L,*\" | sfdisk /dev/VolGroupVM/testvm
> mkfs.ext3 /dev/mapper/VolGroupVM-testvm
>
> Have I missed something?

That's wrong :)

The sfdisk command was intended to automatically create a partition table on which you can use kpartx later; basically, it replaces parted with sfdisk. Instead, you ran mkfs on /dev/VolGroupVM/testvm directly, which writes the filesystem from sector 0 and wipes out the partition table you had just created.

You've got two choices:

(1) mkfs directly on /dev/VolGroupVM/testvm. In this scenario you won't need kpartx or sfdisk. It'd be better to assign the LV directly as xvda1 (or hda1/sda1) instead of what you do now (assigning it as xvda).

(2) Go back to your original setup, replacing parted with sfdisk. In this case the sfdisk command line becomes:

echo "1,,L,*" | sfdisk /dev/VolGroupVM/testvm

Note how I didn't escape the quotes. Verify that the partition table gets created correctly afterwards (see my previous mail); then you still need to run kpartx as you did in your original script.

-- 
Fajar
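Putting option (2) together, the corrected provisioning sequence would look roughly like this (a sketch assembled from the commands already in this thread; names are made up):

lvcreate -C y -L 40G -n testvm VolGroupVM

# Write a single bootable Linux partition spanning the LV
echo "1,,L,*" | sfdisk /dev/VolGroupVM/testvm

# Map the partition into dom0, make the filesystem on it, then unmap
kpartx -p "" -av /dev/VolGroupVM/testvm
mkfs.ext3 /dev/mapper/testvm1
kpartx -d /dev/VolGroupVM/testvm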
On Mon, January 4, 2010 1:20 pm, Fajar A. Nugraha wrote:
> You've got two choices:
> (1) mkfs directly on /dev/VolGroupVM/testvm.
> (2) Go back to your original setup, replacing parted with sfdisk.

Thanks - I thought I'd missed something. The commands were dragged out of my Ruby script, so the quotes aren't really being escaped; I just forgot to take them out before sending the mail :)

I'll add the kpartx command back in after the call to sfdisk and try again.

Thanks again,

Matt.