-----Original Message-----
From: Richard W.M. Jones [mailto:rjones@redhat.com]
Sent: Friday, January 17, 2014 4:40 PM
To: Исаев Виталий Анатольевич
Cc: libguestfs@redhat.com
Subject: Re: [Libguestfs] LVM mounting issue

On Fri, Jan 17, 2014 at 09:45:34AM +0000, Исаев Виталий Анатольевич wrote:
> Be sure that "unknown device" was not written by me :)
>
> I use libguestfs 1.16.34:
> [root@rhevh1 ~]# rpm -qa | grep guest
> libguestfs-1.16.34-2.el6.x86_64
> libguestfs-winsupport-1.0-7.el6.x86_64
> python-libguestfs-1.16.34-2.el6.x86_64
> libguestfs-tools-c-1.16.34-2.el6.x86_64

This is a bug. I have filed a bug about this issue:

https://bugzilla.redhat.com/show_bug.cgi?id=1054761

However, it does indicate that you are not presenting all physical volumes to libguestfs, which means you're probably not adding every device that belongs to the guest. That's going to cause other problems for you.

> Full trace of the guestfish session is attached to this message.
>
> <fs> add-ro /dev/dm-40

Based on your reply in bug 1053684 I still don't know which is the correct device name to open, but it's obviously not /dev/dm-40. I spent quite a long time yesterday trying to get someone from the oVirt team to help out, but without success so far. I will keep you informed on bug 1053684.

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming blog: http://rwmj.wordpress.com
Fedora now supports 80 OCaml packages (the OPEN alternative to F#)

Rich, thank you for your patience and advice. It seems to me that we have mixed up two different problems:

1. Problems accessing ANY thin-provisioned (qcow2) disk with libguestfs 1.16.34 on the hypervisor (this is discussed in bug 1053684);
2. Problems mounting SOME of the disk images with libguestfs 1.16.34 (strictly speaking, only 2 of the 17 VMs on my cluster have this problem).

However, both of the mentioned issues require the correct disk image paths to be provided.
But since you say that /dev/dm-XX devices are obviously not suitable for use with libguestfs, I would ask you for one last thing: to check all my steps on the way to locating and accessing the disk image. Maybe there is an error in my logic.

1. I want to inspect a VM's disk image. There are two disk images belonging to this VM (see the "vm" xml file attached);

2. I determine the disk_image_id of the VM's bootable disk (see the image_id node in the "disks" xml file attached);

3. Now I go to the RHEV-H to look for the disk image itself:

[root@rhevh1 /]# find / -name cc6e4400-7c98-4170-9075-5f5790dfcff3
/dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cc6e4400-7c98-4170-9075-5f5790dfcff3
/var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3
/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3

4. Note that all these files are symbolic links:

[root@rhevh1 /]# find / -name cc6e4400-7c98-4170-9075-5f5790dfcff3 -exec readlink -f {} \;
/dev/dm-40
/dev/dm-40
/dev/dm-40

5. One more symbolic link is in /dev/mapper:

[root@rhevh1 /]# ls -l /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-cc6e4400--7c98--4170--9075--5f5790dfcff3
lrwxrwxrwx. 1 root root 8 2013-11-20 10:59 /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-cc6e4400--7c98--4170--9075--5f5790dfcff3 -> ../dm-40

6. So I have no choice, and I try to open /dev/dm-40 with libguestfs or guestfish. What happens next, you already know.

I apologize for taking your time again, but please evaluate the proposed method of locating the disk image.

Sincerely,
Vitaly Isaev
Software engineer
Information security department
Fintech JSC, Moscow, Russia
On Fri, Jan 17, 2014 at 02:38:43PM +0000, Исаев Виталий Анатольевич wrote:
> 3. Now I go to the RHEV-H to look for the disk image itself:
>
> [root@rhevh1 /]# find / -name cc6e4400-7c98-4170-9075-5f5790dfcff3
> /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cc6e4400-7c98-4170-9075-5f5790dfcff3
> /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3
> /rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3
>
> 4. Note that all these files are symbolic links:
>
> [root@rhevh1 /]# find / -name cc6e4400-7c98-4170-9075-5f5790dfcff3 -exec readlink -f {} \;
> /dev/dm-40
> /dev/dm-40
> /dev/dm-40
>
> 5. One more symbolic link is in /dev/mapper:
>
> [root@rhevh1 /]# ls -l /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-cc6e4400--7c98--4170--9075--5f5790dfcff3
> lrwxrwxrwx. 1 root root 8 2013-11-20 10:59 /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-cc6e4400--7c98--4170--9075--5f5790dfcff3 -> ../dm-40
>
> 6. So I have no choice and I try to open /dev/dm-40 with libguestfs or guestfish. What's next, you already know.

You definitely do have a choice. Don't open /dev/dm-40. Open one of the other paths instead, eg:

virt-inspector2 -v -x -a /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3

It makes a big difference to qemu which path you use, because it searches for backing disks relative to the path of the original disk image.

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Fedora Windows cross-compiler. Compile Windows programs, test, and build Windows installers. Over 100 libraries supported. http://fedoraproject.org/wiki/MinGW
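Rich's point about backing disks can be made concrete with a small sketch. This is illustrative Python, not a real qemu API: the behaviour it models is that qemu resolves a relative backing-file reference against the directory of the image that names it, so the path you open the image by determines where the rest of the chain is looked up:

```python
import os

def resolve_backing(image_path, backing_ref):
    """Resolve a backing-file reference the way qemu does: relative
    references are taken against the directory of the referring image,
    not against the current working directory."""
    if os.path.isabs(backing_ref):
        return backing_ref
    return os.path.normpath(
        os.path.join(os.path.dirname(image_path), backing_ref))
```

Opened as /dev/dm-40 the image's "directory" is /dev, so a relative backing reference can never be found there; opened via the /rhev/data-center/.../images/... path, the same reference lands inside the storage domain where the backing volume actually lives.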
-----Original Message-----
From: Richard W.M. Jones [mailto:rjones@redhat.com]
Sent: Friday, January 17, 2014 6:46 PM
To: Исаев Виталий Анатольевич
Cc: libguestfs@redhat.com
Subject: Re: [Libguestfs] LVM mounting issue
Hello, Richard,

I apologise for the late reply. It took me some time to work with the solution you proposed and to continue the discussion in bug 1053684 <https://bugzilla.redhat.com/show_bug.cgi?id=1053684>. What I tried to do was to run the libguestfs tests with disk images taken not from /dev/mapper or directly from /dev/, but from /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/<…> instead. Unfortunately, the results were poor: I could not even access some of the images with libguestfs, whereas before I was able to launch libguestfs with every image from /dev/dm-xx properly.

I ran the script (test2.py is attached to this message) on both nodes of my cluster (node1 and node2 outputs are attached too) in order to test how libguestfs works with your approach. Consider the following table (all VMs are running):

VM               RUNNING ON NODE   DISK IS ACCESSIBLE ON A NODE*
build_list       1                 1
build-ss         1                 1
build-ss001      1                 -
build-ss002      1                 -
fs               2                 1,2
ipa1             1                 -
koji-build-test  2                 -
koji_hub         1                 1
koji-hub-test    2                 -
postgres         1                 -
share            2                 1,2
test1            1                 1
ts2              1                 -
vc2              1                 1
vmbuild          1                 1
win7_32          2                 1,2
winxp            2                 1,2

* - the disk was found in /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/ and could be handled correctly with the libguestfs tool

So you can see that I did not manage to run libguestfs with 7 disk images. (It is also quite strange to me that some disks are accessible on both nodes while the other disks are mapped to only one node - but this is an oVirt-specific question.) However, Comment 14 <https://bugzilla.redhat.com/show_bug.cgi?id=1053684#c14> contains my workaround for this problem (recursive resolving of qcow2 disks into raw disks using `qemu-img` output). This workaround allowed accessing all the disks with libguestfs.
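The workaround mentioned above (recursively resolving qcow2 disks down to their raw bases from `qemu-img` output) could look roughly like the sketch below. `parse_backing` and `backing_chain` are hypothetical names, not the actual code from comment 14, and the parsed text is the human-readable `qemu-img info` output:

```python
import subprocess

def parse_backing(info_text):
    """Pull the backing file path out of `qemu-img info` text output."""
    for line in info_text.splitlines():
        if line.startswith("backing file:"):
            # qemu-img may append " (actual path: ...)" after the name
            return line.split(":", 1)[1].split(" (actual path")[0].strip()
    return None

def backing_chain(path):
    """Follow the chain of backing files down to the raw base image."""
    chain = [path]
    while True:
        out = subprocess.check_output(["qemu-img", "info", chain[-1]]).decode()
        backing = parse_backing(out)
        if backing is None:
            return chain
        chain.append(backing)
```

The last element of the returned chain is the base (raw) volume, which can then be handed to libguestfs directly.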
Finally, I would like to draw your attention to the machine that is different from all the other VMs (see the node1 output file). This is the one with which we started this thread. It has RHEL 6.4 on board and is working fine. And it is the only VM whose operating system cannot be detected by libguestfs. Even when I launch libguestfs on the path you recommended in your last message, I receive the same result (the virt-inspector2 -v -x output is attached to this message too):

------------------------------------------------------- 2 ---------------------------------------------------------------------
VM: 'build-ss' - disk_image_id: cc6e4400-7c98-4170-9075-5f5790dfcff3
Trying to open /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/8a3e02de-d8ab-4357-ba8c-490f3ba3e85c/cc6e4400-7c98-4170-9075-5f5790dfcff3
guestfs succesfully launched
Physical volumes: ['/dev/vda2', 'unknown device']
Logical volumes: ['/dev/vg_kojit/lv_root', '/dev/vg_kojit/lv_swap']
Partitions: ['/dev/vda1', '/dev/vda2']
Operating systems: []

Maybe you have some idea about repairing this VM in order to get the full libguestfs functionality? I need to get the mountpoints and mount the filesystem. Thank you in advance!

Sincerely,
Vitaly Isaev
Software engineer
Information security department
Fintech JSC, Moscow, Russia
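Tying together Rich's earlier remark ("unknown device" among the physical volumes indicates that not every disk of the guest was presented to libguestfs) with the output above, one plausible reading of the result can be expressed as a small hypothetical check; `diagnose` is an illustrative name, not part of test2.py:

```python
def diagnose(pvs, oses):
    """Flag the two symptoms visible in the output above: an 'unknown
    device' placeholder among the PVs (a disk of the VM was never added
    to the handle) and an empty OS list (inspection could not complete,
    which is expected while the volume group is missing a member)."""
    problems = []
    if "unknown device" in pvs:
        problems.append("add every disk of the VM before calling launch")
    if not oses:
        problems.append("OS inspection found nothing; mountpoints unavailable")
    return problems
```

With the values printed above this reports both symptoms, which suggests trying the inspection again with both of the VM's disks (both images listed in the "vm" xml) added to the same libguestfs handle, rather than repairing the guest itself.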