Исаев Виталий Анатольевич
2014-Jan-14 14:57 UTC
Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster
-----Original Message-----
From: Richard W.M. Jones [mailto:rjones@redhat.com]
Sent: Tuesday, January 14, 2014 4:42 PM
To: Исаев Виталий Анатольевич
Cc: libguestfs@redhat.com
Subject: Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster

On Tue, Jan 14, 2014 at 08:07:43AM +0000, Исаев Виталий Анатольевич wrote:
> [00072ms] /usr/libexec/qemu-kvm \
>     -global virtio-blk-pci.scsi=off \
>     -nodefconfig \
>     -nodefaults \
>     -nographic \
>     -drive file=/dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f,snapshot=on,if=virtio \
>     -nodefconfig \
>     -machine accel=kvm:tcg \
>     -m 500 \
>     -no-reboot \
>     -device virtio-serial \
>     -serial stdio \
>     -device sga \
>     -chardev socket,path=/tmp/libguestfs2yVhoH/guestfsd.sock,id=channel0 \
>     -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
>     -kernel /var/tmp/.guestfs-0/kernel.3269 \
>     -initrd /var/tmp/.guestfs-0/initrd.3269 \
>     -append 'panic=1 console=ttyS0 udevtimeout=300 no_timer_check acpi=off printk.time=1 cgroup_disable=memory selinux=0 guestfs_verbose=1 TERM=xterm ' \
>     -drive file=/var/tmp/.guestfs-0/root.3269,snapshot=on,if=virtio,cache=unsafe
> qemu-kvm: -drive file=/dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f,snapshot=on,if=virtio: could not open disk image /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f: No such file or directory
> libguestfs: child_cleanup: 0x23dc5d0: child process died
> libguestfs: trace: launch = -1 (error)

libguestfs runs qemu with the command line above.  qemu tries to open the /dev/mapper/1a9... file.  qemu reports that it cannot open that file.

Unfortunately qemu's error messages are very poor.  However there are a few possibilities:

(a) An actual permissions issue.  Since you seem to be running this as root, this doesn't seem likely, but you should check it anyway.  Are there SELinux AVCs?

(b) qemu cannot open the backing file.  Try running:

  qemu-img info /dev/mapper/1a9...

and if it has a backing file, check that the backing file(s) [recursively] can be opened too.

(c) Also check that the backing file paths are not relative.  If they are, you will need to run your script from the correct directory so that the relative paths are accessible.

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-p2v converts physical machines to virtual machines.  Boot with a live CD or over the network (PXE) and turn machines into KVM guests. http://libguestfs.org/virt-v2v

Dear Rich, thank you for the prompt reply to my question.  Similar problems have been found with all of the remaining Thin Provisioned disks in the cluster, while all the Preallocated disks were handled by libguestfs correctly.  I guess these issues are caused by reason (b) and probably (c): the backing file of each of the thin provisioned disks does not exist.
For instance, let's consider the /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f symbolic link pointing to /dev/dm-30:

[root@rhevh1 mapper]# pwd
/dev/mapper
[root@rhevh1 mapper]# qemu-img info 1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f
image: 1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 0
cluster_size: 65536
backing file: ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
[root@rhevh1 mapper]# ll ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
ls: cannot access ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5: No such file or directory

Note that /dev/dm-30 is not accessible with libguestfs.

Next I try to find files with the same name.  The search returns three paths, including /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5:

[root@rhevh1 mapper]# find / -name cbe36298-6397-4ffa-ba8c-5f64e90023e5
/dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5
/var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5

In turn, /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5 is a symbolic link which points to /dev/dm-19.

Finally, I try to launch libguestfs against the block device directly:

[root@rhevh1 mapper]# qemu-img info /dev/dm-19
image: /dev/dm-19
file format: raw
virtual size: 40G (42949672960 bytes)
disk size: 0
[root@rhevh1 mapper]# python
Python 2.6.6 (r266:84292, Oct 12 2012, 14:23:48)
[GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import guestfs
>>> g = guestfs.GuestFS()
>>> g.add_drive_opts("/dev/dm-19", readonly=1)
>>> g.launch()
>>> g.lvs()
[]
>>> g.pvs()
[]
>>> g.list_partitions()
['/dev/vda1', '/dev/vda2']
>>> g.inspect_os()
['/dev/vda1']

Now I am a little confused by the results of my research.  I found that a VM with only one disk attached in fact has at least two block devices mapped into the hypervisor's file system: /dev/dm-19 (raw) and /dev/dm-30 (qcow2).  The RHEV-M API (aka the Python oVirt SDK) provides no information about the first one, and the second cannot be accessed from libguestfs.  I have an urgent need to work with chosen VM disk images through the libguestfs layer, but I don't know exactly which images belong to which VM.  It seems like I'm going the hard way :)

Sincerely,
Vitaly Isaev
Software engineer
Information security department
Fintech JSC, Moscow, Russia
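To make the recursive check in Rich's suggestion (b) concrete, here is a minimal sketch that walks a qcow2 backing chain and reports the first link that cannot be found.  The backing_file() and walk_backing_chain() helpers are hypothetical names, not part of libguestfs or the oVirt SDK; the sketch parses the plain-text "backing file:" line of qemu-img info output as it appears in the transcripts above, resolves relative backing paths against the image's directory, and is written for the node's Python 2.6 (subprocess.check_output is avoided because it only appeared in Python 2.7).

import os
import subprocess

def backing_file(path):
    # Run "qemu-img info" and extract the "backing file:" line, if any.
    proc = subprocess.Popen(["qemu-img", "info", path],
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    for line in out.splitlines():
        if line.startswith("backing file:"):
            backing = line.split(":", 1)[1].strip()
            # Relative backing paths are resolved against the directory of
            # the image that references them (Rich's point (c)).
            base_dir = os.path.dirname(os.path.abspath(path))
            return os.path.normpath(os.path.join(base_dir, backing))
    return None

def walk_backing_chain(path):
    # Print every image in the chain, stopping at the first missing link.
    while path is not None:
        if not os.path.exists(path):
            print("broken link: %s does not exist" % path)
            return
        print("ok: %s" % path)
        path = backing_file(path)

walk_backing_chain("/dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f")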
Richard W.M. Jones
2014-Jan-14 17:42 UTC
Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster
On Tue, Jan 14, 2014 at 02:57:35PM +0000, Исаев Виталий Анатольевич wrote:
> Dear Rich, thank you for the prompt reply to my question.  Similar
> problems have been found with all of the remaining Thin Provisioned
> disks in the cluster, while all the Preallocated disks were handled
> by libguestfs correctly.  I guess these issues are caused by reason
> (b) and probably (c): the backing file of each of the thin
> provisioned disks does not exist.
[...]
> Finally, I try to launch libguestfs against the block device directly:
> [root@rhevh1 mapper]# qemu-img info /dev/dm-19
> image: /dev/dm-19
> file format: raw
> virtual size: 40G (42949672960 bytes)
> disk size: 0
> [root@rhevh1 mapper]# python
> Python 2.6.6 (r266:84292, Oct 12 2012, 14:23:48)
> [GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import guestfs
> >>> g = guestfs.GuestFS()
> >>> g.add_drive_opts("/dev/dm-19", readonly=1)
> >>> g.launch()
> >>> g.lvs()
> []
> >>> g.pvs()
> []
> >>> g.list_partitions()
> ['/dev/vda1', '/dev/vda2']
> >>> g.inspect_os()
> ['/dev/vda1']

This works because you're accessing the backing disk, not the top disk.  Since the backing disk (in this case) doesn't itself have a backing disk, qemu has no problem opening it.

> Now I am a little confused by the results of my research.  I found
> that a VM with only one disk attached in fact has at least two block
> devices mapped into the hypervisor's file system: /dev/dm-19 (raw)
> and /dev/dm-30 (qcow2).  The RHEV-M API (aka the Python oVirt SDK)
> provides no information about the first one, and the second cannot
> be accessed from libguestfs.  I have an urgent need to work with
> chosen VM disk images through the libguestfs layer, but I don't know
> exactly which images belong to which VM.  It seems like I'm going
> the hard way :)

Basically you need to find out which directory RHEV-M itself starts qemu in.  Try going onto the node and doing:

  ps ax | grep qemu
  ls -l /proc/PID/cwd

substituting PID for some of the qemu process IDs.  My guess would be some subdirectory of /rhev/data-center/mnt/blockSD/

Then start your test script from that directory.

Another thing you could do is file a bug against oVirt asking them not to use relative paths for backing disks, since plenty of people have problems with this.

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting, bindings from many languages.  http://libguestfs.org
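Rich's two commands can also be scripted.  Below is a sketch, assuming root on the RHEV node (as in the sessions above); find_qemu_cwds() is a hypothetical helper that scans /proc the same way "ps ax | grep qemu" followed by "ls -l /proc/PID/cwd" does by hand.

import os

def find_qemu_cwds():
    # Map each running qemu PID to the directory it was started in.
    cwds = {}
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/cmdline" % pid) as f:
                cmdline = f.read()
            if "qemu" in cmdline:
                # Reading /proc/PID/cwd needs root (or the same user).
                cwds[int(pid)] = os.readlink("/proc/%s/cwd" % pid)
        except (IOError, OSError):
            continue  # the process exited, or access was denied
    return cwds

for pid, cwd in find_qemu_cwds().items():
    print("qemu pid %d runs in %s" % (pid, cwd))

# If the directories turn out to live under /rhev/data-center/mnt/blockSD/,
# calling os.chdir(cwd) before g.launch() lets qemu resolve the relative
# backing file paths of the qcow2 overlays.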
Исаев Виталий Анатольевич
2014-Jan-16 07:29 UTC
Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster
-----Original Message-----
From: Richard W.M. Jones [mailto:rjones@redhat.com]
Sent: Tuesday, January 14, 2014 9:43 PM
To: Исаев Виталий Анатольевич
Cc: libguestfs@redhat.com
Subject: Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster

[...]

Basically you need to find out which directory RHEV-M itself starts qemu in. [...] Then start your test script from that directory.

Another thing you could do is file a bug against oVirt asking them not to use relative paths for backing disks, since plenty of people have problems with this.

Rich.

Thank you, Richard.  I've posted a message to the RH bugtracker:
https://bugzilla.redhat.com/show_bug.cgi?id=1053684

Further work on this problem has made things even more complicated: I found that several qcow2 disks have qcow2 disks as backing disks in turn.  So now I have to resolve qcow2 disks to raw disks recursively in order to access them with libguestfs.

Vitaly Isaev
Software engineer
Information security department
Fintech JSC, Moscow, Russia
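The recursive resolution described above can be sketched as follows, reusing the hypothetical backing_file() helper from the first sketch and assuming the script runs from a directory where the relative backing paths resolve, per Rich's advice.  One caveat: opening only the raw base of a chain bypasses whatever the guest has written into the qcow2 overlays, so the data read this way may be stale.

import guestfs

def resolve_to_base(path):
    # Follow backing files until an image that has none (the raw base).
    while True:
        backing = backing_file(path)  # helper from the first sketch
        if backing is None:
            return path
        path = backing

top = "/dev/dm-30"  # hypothetical entry point: the qcow2 top of a chain
base = resolve_to_base(top)

g = guestfs.GuestFS()
g.add_drive_opts(base, readonly=1)
g.launch()
print(g.inspect_os())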