I've been running into a reproducible problem when using default LVM volume group names to present block devices for virtual machines in KVM, and I'm wondering why it is happening.

On dom0 I make a default VolGroup00 for the operating system. I make a second VolGroup01 for logical volumes that will be block devices for virtual systems.

In VolGroup01, I make two LVs for one system: lv.sys1 and lv.sys1-data.

I then build a new virtual machine called sys1, using lv.sys1 for the root filesystem and lv.sys1-data for an independent data partition. Everything works great after installation, and vgdisplay on both systems looks fine.

If I then run vgscan on the host system, however, it picks up the VolGroup01 I created _within_ the virtual machine, so I now have two VolGroup01s with different UUIDs showing up on dom0.

Now, I can see how vgscan would mistakenly see sys1's VolGroup01 on the block device lv.sys1-data, but why are the VolGroup00 VGs not colliding as well?

When pvdisplay is run, I have a new "physical volume" that is actually just a logical volume of the original VolGroup01:

[root at iain2 ~]# pvdisplay
  WARNING: Duplicate VG name VolGroup01: Existing FNiKc9-BB3t-ziMg-prWW-n8RA-OMzk-obiKnf (created here) takes precedence over C8fNMV-aeSW-syIn-fWJZ-vJdK-N0As-Itrvfi
  WARNING: Duplicate VG name VolGroup01: Existing FNiKc9-BB3t-ziMg-prWW-n8RA-OMzk-obiKnf (created here) takes precedence over C8fNMV-aeSW-syIn-fWJZ-vJdK-N0As-Itrvfi
  --- Physical volume ---
  PV Name               /dev/VolGroup01/lv-sys1-data
  VG Name               VolGroup01
  PV Size               40.00 GB / not usable 4.00 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              10239
  Free PE               0
  Allocated PE          10239
  PV UUID               FTA4QU-ydZ7-e2Yy-nBsi-t4st-3jj7-IAkQH8

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               VolGroup00
  PV Size               39.06 GB / not usable 29.77 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1249
  Free PE               0
  Allocated PE          1249
  PV UUID               tTViks-3lBM-HGzV-mnN9-zRsT-fFT0-ZsJRse

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup01
  PV Size               240.31 GB / not usable 25.75 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              7689
  Free PE               5129
  Allocated PE          2560
  PV UUID               ZE5Io3-WYIO-EfOQ-h03q-zGdF-Frpa-tm63fX

Has anyone experienced this? It's very unnerving not to know whether your data is intact as you add new logical volumes for KVM systems. I suppose the lesson learned here is to give VGs host-specific names.

-- 
Iain Morris
iain.t.morris at gmail.com
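[Editor's note: for an existing collision like the one above, vgrename accepts a VG UUID in place of a name, which is exactly what the duplicate-name situation calls for. A sketch, using the warning text from this thread; the new VG name VolGroup01.sys1 is made up for illustration:]

```shell
# The duplicate-VG warning names both UUIDs; the one after
# "takes precedence over" is the nested (guest) copy.
# Sample warning text taken verbatim from the pvdisplay output above:
warning='WARNING: Duplicate VG name VolGroup01: Existing FNiKc9-BB3t-ziMg-prWW-n8RA-OMzk-obiKnf (created here) takes precedence over C8fNMV-aeSW-syIn-fWJZ-vJdK-N0As-Itrvfi'

# Strip everything up to and including "takes precedence over ",
# leaving just the guest VG's UUID:
dup_uuid=${warning##*takes precedence over }
echo "$dup_uuid"
# -> C8fNMV-aeSW-syIn-fWJZ-vJdK-N0As-Itrvfi

# vgrename can take a UUID where the name is ambiguous (run as root):
# vgrename "$dup_uuid" VolGroup01.sys1
```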
On 10/25/2010 12:31 PM, Iain Morris wrote:
> I then build a new virtual machine called sys1, using lv.sys1 for the
> root filesystem, and lv.sys1-data for an independent data partition.
> Everything works great after installation, and vgdisplay on both
> systems looks great.
>
> If I then run vgscan, however, on the host system, it picks up the
> VolGroup01 I created _within_ the virtual machine, so I now have 2
> VolGroup01's with different UUIDs showing up on dom0.

Which block devices are you exporting to your guest? Post the libvirt
configuration file for it.
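[Editor's note: for reference, an LV-backed disk in a libvirt domain definition typically looks like the fragment below. This is a sketch using the LV names from this thread, not the asker's actual configuration, which was never posted:]

```
<disk type='block' device='disk'>
  <source dev='/dev/VolGroup01/lv.sys1-data'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

With type='block' the guest receives the whole LV as a bare disk, so any PV the guest creates directly on it is visible to a host-side LVM scan.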
On Oct 25, 2010, at 3:31 PM, Iain Morris <iain.t.morris at gmail.com> wrote:
> I've been running into a reproducible problem when using default LVM volume group names to present block devices for virtual machines in KVM, and I'm wondering why it is happening.
>
> On dom0 I make a default VolGroup00 for the operating system. I make a second VolGroup01 for logical volumes that will be block devices for virtual systems.
>
> In VolGroup01, I make two lv's for one system: lv.sys1, and lv.sys1-data.
>
> I then build a new virtual machine called sys1, using lv.sys1 for the root filesystem, and lv.sys1-data for an independent data partition. Everything works great after installation, and vgdisplay on both systems looks great.
>
> If I then run vgscan, however, on the host system, it picks up the VolGroup01 I created _within_ the virtual machine, so I now have 2 VolGroup01's with different UUIDs showing up on dom0.
>
> Now I can see how vgscan would mistakenly see the VolGroup01 of sys1 on the block device lv.sys1-data, but why are the VolGroup00 vg's not colliding as well?
>
> [pvdisplay output showing the duplicate VolGroup01 snipped]
>
> Has anyone experienced this? It's very unnerving to know your data is intact as you add new logical volumes for kvm systems. I suppose the lesson learned here is to provide VGs with specific host names.

You need to exclude the LVs in the host VG from being scanned for sub-VGs. It's actually easier to just list what SHOULD be scanned rather than what shouldn't. Look in /etc/lvm/lvm.conf.

You can also avoid this by creating partition-based PVs in the VMs rather than whole-disk PVs, which would need kpartx run on the host LV before LVM could scan it.

-Ross
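[Editor's note: Ross's lvm.conf whitelist suggestion, sketched out. The device paths here are taken from the pvdisplay output earlier in the thread and are assumptions; adjust them to the host's real PVs:]

```
devices {
    # Accept only the host's own PVs (/dev/sda2 and /dev/sda3 in this
    # thread), reject every other block device, so LVM never scans
    # inside guest LVs for nested VGs:
    filter = [ "a|^/dev/sda2$|", "a|^/dev/sda3$|", "r|.*|" ]
}
```

After editing, re-running vgscan should rebuild LVM's device cache, and the guest's duplicate VolGroup01 should no longer appear on the host.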