Hi,

Not totally new to Xen but still very green and running into some problems. Feel free to kick me over to the DRBD people if this is not relevant here. I'll provide more info upon request, but for now I'll be brief.

Debian/Squeeze running 2.6.32-5-xen-amd64 (2.6.32-21), Xen hypervisor 4.0.1~rc6-1 and drbd-8.3.8.

One domU configured, with disk and swap images:

    root = '/dev/xvda2 ro'
    disk = [ 'file:/xen_cluster/r1/disk.img,xvda2,w',
             'file:/xen_cluster/r2/swap.img,xvda1,w', ]

/xen_cluster/r{1,2} are OCFS2 filesystems on top of 2 drbd resources, r1 and r2, shared between 2 servers. The drbd backing devices are LVs on top of an md raid1 mirror. I can create the domU and do live migration between the 2 hosts so far, good.

I ultimately want this running under pacemaker/corosync/openais, so following the DRBD users guide I modified this to:

    root = '/dev/xvda2 ro'
    disk = [ 'drbd:r1,xvda2,w',
             'drbd:r2,xvda1,w', ]

Trying to create the domU leads to:

    # xm create -c xennode-1.cfg
    Using config file "/xen_cluster/r1/xennode-1.cfg".
    Error: Device 51714 (vbd) could not be connected. Hotplug scripts not working.

Not quite sure what to do now. Thanks for the inputs.

jf
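P.S. If it helps anyone reproduce this, the obvious first checks would be something like the following (just a sketch; standard Debian paths, and the only name assumed beyond those is the resource r1 from above):

    ls -l /etc/xen/scripts/block-drbd   # the helper script 'drbd:' VBDs rely on
    drbdadm role r1                     # the resource must be promotable on this node
    tail -n 50 /var/log/xen/xend.log    # hotplug errors usually show up here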
Can you please provide a copy of your domU config?
* khris4 <khris4@gmail.com> [20100930 00:50]:
> Can you please provide a copy of your domU config?

This works:

    kernel  = '/boot/vmlinuz-2.6.32-5-xen-amd64'
    ramdisk = '/boot/initrd.img-2.6.32-5-xen-amd64'
    vcpus   = '1'
    memory  = '2048'
    root    = '/dev/xvda2 ro'
    disk    = [ 'file:/xen_cluster/r1/xennode-1/disk.img,xvda2,w',
                'file:/xen_cluster/r2/xennode-1/swap.img,xvda1,w', ]
    name    = 'xennode-1'
    vif     = [ 'ip=xxx.xxx.xxx.xxx,mac=00:16:3E:B7:6F:70,bridge=eth0' ]
    on_poweroff = 'destroy'
    on_reboot   = 'restart'
    on_crash    = 'restart'

If I replace the disk entry with

    disk = [ 'drbd:r1,xvda2,w',
             'drbd:r2,xvda1,w', ]

the domU doesn't come up. /xen_cluster/r1 is the mount point for drbd resource r1; same for r2.

jf
Do you have copies of your xend.log and qemu-dm-<name of your domU>.log?
* khris4 <khris4@gmail.com> [20100930 13:31]:
> Do you have copies of your xend.log and qemu-dm-<name of your domU>.log?

I've attached the xend.log from trying to create the domU. I see nothing for qemu in /var/log/libvirt/qemu/.

regards,
jf
I forgot to ask: where is your copy of drbd.conf?
* khris4 <khris4@gmail.com> [20100930 16:56]:
> I forgot to ask: where is your copy of drbd.conf?

/etc/drbd.d/r{1,2}.res:

    resource r1 {
        device    /dev/drbd1;
        disk      /dev/xen_vg/xen_lv1;
        meta-disk internal;
        startup {
            degr-wfc-timeout  30;
            wfc-timeout       30;
            become-primary-on both;
        }
        net {
            allow-two-primaries;
            cram-hmac-alg sha1;
            shared-secret "lucid";
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;
            rr-conflict   disconnect;
        }
        disk {
            fencing     resource-only;
            on-io-error detach;
        }
        handlers {
            # these handlers are necessary for drbd 8.3 + pacemaker compatibility
            fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
            outdate-peer        "/usr/lib/drbd/outdate-peer.sh";
            split-brain         "/usr/lib/drbd/notify-split-brain.sh root";
            pri-on-incon-degr   "/usr/lib/drbd/notify-pri-on-incon-degr.sh root";
            pri-lost-after-sb   "/usr/lib/drbd/notify-pri-lost-after-sb.sh root";
            local-io-error      "/usr/lib/drbd/notify-io-error.sh malin";
        }
        syncer {
            rate       24M;
            csums-alg  sha1;
            al-extents 727;
        }
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }

resource r2 is identical except for:

    resource r2 {
        device /dev/drbd2;
        disk   /dev/xen_vg/xen_lv2;
        ...
        on node1 { address 10.0.0.1:7790; }
        on node2 { address 10.0.0.2:7790; }
    }

jf
Okay, now my memory is coming back on setting up DRBD. The reason it's not working is that you're telling Xen to start the domU with the drbd script. If you look at the block-drbd script in the /etc/xen/scripts/ directory, you will see that it only handles making the drbd resource primary, and then hands the device listed under "disk" in your drbd config off to the domU to boot the operating system. Let me give you an example. Here is my setup at my office:

    hardware raid10 -> lvm -> drbd -> domU

What this means is my server has RAID10 set up under LVM, and I create an LV for each drbd virtual disk I want. So /dev/XEN00/proxy is set up as the disk in drbd.conf for resource "proxy", and in my domU proxy config you will see this:

    disk = [ 'drbd:proxy,xvda,w' ]

This works because each drbd resource has an LV partition with an operating system behind it.

The reason your setup is not working is that you are handing the domU a drbd virtual device that has a filesystem behind it, and when your domU boots up, Xen doesn't know it needs to mount the drbd device and then provide a file back-end. You're going to need to write a script to handle some of this work, or ask if anyone has a script already made.

This is based on your setup below, if I understand what you are trying to do:

    hardware raid -> drbd -> file-system -> domU
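To make that concrete, here is roughly what block-drbd does on the "add" event. This is a sketch from memory, not the verbatim script, so check the copy shipped with your drbd package:

    # sketch of block-drbd's "add" path (illustrative, not the shipped script)
    drbd_resource="proxy"                    # parsed from the 'drbd:proxy' VBD spec
    drbdadm primary "$drbd_resource"         # promote the resource on this node
    dev=$(drbdadm sh-dev "$drbd_resource")   # resolve the resource to e.g. /dev/drbd1
    # the real script then writes $dev into xenstore so blkback attaches the raw
    # drbd device directly to the domU; no filesystem is ever mounted in dom0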
* khris4 <khris4@gmail.com> [20100930 17:19]:
> The reason your setup is not working is that you are handing the domU a
> drbd virtual device that has a filesystem behind it [...] You're going
> to need to write a script to handle some of this work, or ask if anyone
> has a script already made.

hmmm, now I'm really confused.

This is what I'm doing on one node:

    raid1 --+--> lv -> drbd (r1) -> ocfs2 (mount point /xen_cluster/r1)
            \--> lv -> drbd (r2) -> ocfs2 (mount point /xen_cluster/r2)

so the VBD should look like

    disk = [ 'file:/xen_cluster/r1/xennode-1/disk.img,xvda2,w',
             'file:/xen_cluster/r2/xennode-1/swap.img,xvda1,w', ]

as I've read in the drbd user guide on how to integrate drbd, ocfs2 and Xen. If I use that file: type VBD in the domU config, I can perform live migration.

I don't understand what you're telling me here! What is /dev/XEN00/proxy?

sorry for being dense,
jf
Okay, I wasn't understanding your setup for a minute there. Now that I see your diagram, I can better understand what you want to do. Your setup is fine using the file back-end for your domUs, as long as the mount point backed by the drbd device is set up on the server and you use the file back-end in the domU's config. The confusion came from you trying to use the drbd script in your domU's config; the reason you were getting that error is that you can't use the drbd script with your setup.

> I don't understand what you're telling me here!
> What is /dev/XEN00/proxy?

That's an example of my domU's LV location that is linked to drbd.conf. This is just my setup on one node:

    raid10 --+--> lv -> drbd (proxy)    -> Proxy DomU
             \--> lv -> drbd (sugarcrm) -> Sugarcrm DomU
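If it helps, a quick pre-flight check for your file:-backed setup would look something like this. The resource and paths are taken from your earlier mails, and it's only a sketch:

    drbdadm role r1                # expect Primary/Primary for dual-primary ocfs2
    mount | grep /xen_cluster/r1   # the ocfs2 mount your disk.img lives on
    xm create -c xennode-1.cfg     # with the file: disk lines, not drbd: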
On Thu, Sep 30, 2010 at 11:48 PM, Jean-Francois Malouin
<Jean-Francois.Malouin@bic.mni.mcgill.ca> wrote:
> hmmm, now I'm really confused.
> [...]
> I don't understand what you're telling me here!
> What is /dev/XEN00/proxy?

I experienced the same problem and discovered this link:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=588406

It seems there is an error in /etc/xen/scripts/block in the Xen packaged for Debian Squeeze. Perform the changes mentioned in that link and I believe you will be able to use your drbd resource as a disk; it worked for me.

Regards,
Vidar
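P.S. Once the script is patched, re-testing should be as simple as the following; the domain and resource names are taken from earlier in this thread:

    xm create -c xennode-1.cfg   # should no longer fail with the vbd hotplug error
    drbdadm role r1              # block-drbd should have promoted this to Primary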