Hi all,

I set up two systems with CentOS 5 64-bit, Xen 3.2 (rebuilt from src.rpm) and DRBD.
First I installed a CentOS 4.5 32-bit guest using the device /dev/drbd0 (is it
possible to use the drbd resource at this step?), then I dumped the configuration,
changed the driver name and source dev, and added the kernel, ramdisk and root
parameters.

This is my configuration XML:

<domain type='xen' id='-1'>
  <name>SLSPTEST</name>
  <uuid>10147595b176607d804d0e1dc1d2103d</uuid>
  <bootloader>/usr/bin/pygrub</bootloader>
  <os>
    <type>linux</type>
  </os>
  <memory>2097152</memory>
  <vcpu>1</vcpu>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
      <mac address='00:16:3e:44:d3:9b'/>
    </interface>
    <disk type='block' device='disk'>
      <driver name='drbd'/>
      <source dev='r0'/>
      <target dev='xvda'/>
    </disk>
  </devices>
  <kernel>/boot/vmlinuz-2.6.9-67.0.7.ELxenU</kernel>
  <ramdisk>/boot/initrd-2.6.9-67.0.7.ELxenU.img</ramdisk>
  <root>ro root=/dev/VolGroup00/LogVol00 console=xvc0 selinux=0</root>
</domain>

The drbd configuration is:

global {
    usage-count yes;
}
common {
    protocol C;
    disk {
        on-io-error detach;
    }
    syncer {
        verify-alg md5;
        rate 50M;
    }
}
resource r0 {
    startup {
        become-primary-on both;
    }
    net {
        allow-two-primaries;
    }
    on hyp11.infolan {
        device    /dev/drbd0;
        disk      /dev/HYP11VM/VMNAME;
        address   10.100.0.2:7788;
        meta-disk internal;
    }
    on hyp10.infolan {
        device    /dev/drbd0;
        disk      /dev/HYP10VM/VMNAME;
        address   10.100.0.1:7788;
        meta-disk internal;
    }
}

Everything seems to be ready: I loaded the configuration file successfully with
"virsh define SLSPTEST", and the drbd resource is set up according to the DRBD
guide (dual-primary mode enabled).

[root@hyp10 scripts]# cat /proc/drbd
version: 8.2.5 (api:88/proto:86-88)
GIT-hash: 9faf052fdae5ef0c61b4d03890e2d2eab550610c build by buildsvn@c5-x8664-build, 2008-03-09 10:16:12
 0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate C r---
    ns:12539333 nr:0 dw:1005385 dr:11578691 al:558 bm:704 lo:0 pe:0 ua:0 ap:0
        resync: used:0/31 hits:720168 misses:704 starving:0 dirty:0 changed:704
        act_log: used:0/127 hits:272661 misses:558 starving:0 dirty:0 changed:558

Unluckily, when I execute "xm start SLSPTEST" I get:

Error: Disk isn't accessible

The xend log is:

[2008-04-24 16:36:56 8572] ERROR (XendBootloader:43) Disk isn't accessible
[2008-04-24 16:36:56 8572] ERROR (XendDomainInfo:440) VM start failed
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 420, in start
    XendTask.log_progress(31, 60, self._initDomain)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendTask.py", line 209, in log_progress
    retval = func(*args, **kwds)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1694, in _initDomain
    self._configureBootloader()
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 2050, in _configureBootloader
    bootloader_args, kernel, ramdisk, args)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendBootloader.py", line 44, in bootloader
    raise VmError(msg)
VmError: Disk isn't accessible
[2008-04-24 16:36:56 8572] DEBUG (XendDomainInfo:1883) XendDomainInfo.destroy: domid=12
[2008-04-24 16:36:56 8572] DEBUG (XendDomainInfo:1900) XendDomainInfo.destroyDomain(12)
[2008-04-24 16:36:56 8572] DEBUG (XendDomainInfo:1524) No device model
[2008-04-24 16:36:56 8572] DEBUG (XendDomainInfo:1526) Releasing devices

How could I solve this problem? I want to use the suggested configuration using
the drbd driver, but it doesn't work.

Thanks

Marco
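One rough way to narrow down a "Disk isn't accessible" error from XendBootloader
(a sketch only; it assumes dom0 on the Primary node can open /dev/drbd0, the
device configured above) is to run pygrub against the device by hand from dom0:

    # from dom0 on the node where drbd0 is Primary
    ls -l /dev/drbd0                 # the device node should exist and be readable
    /usr/bin/pygrub /dev/drbd0       # should show the guest's grub menu if the disk is readable

If pygrub can read the device directly but xend cannot, the problem is more
likely in the disk stanza of the domain config than in DRBD itself.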
Ross S. W. Walker
2008-Apr-24 14:52 UTC
RE: [Xen-users] drbd, xen and disk not accessible..
Marco Strullato wrote:
> [...]
> How could I solve this problem? I want to use the suggested
> configuration using the drbd driver, but it doesn't work.

Marco,

When you do an 'rpm -qa | grep xen', does it show both xen-3.2.0 and
xen-libs-3.2.0 as installed? They should be, given the dependencies. If so,
I would ask on the drbd list why their drbd disk type doesn't work as shown
on their wiki. Maybe it was excluded from Xen, and when they wrote the wiki
page they were hoping it would have been adopted.

It doesn't really matter anyway, because listing the device as phy:drbd0
would give you the exact same result, which is to attach xenblk on the
backend.

-Ross
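For readers following this advice with the libvirt XML above, a minimal sketch
of what the disk stanza might look like with the plain phy: backend instead of
the drbd driver name (the /dev/drbd0 path is taken from the drbd.conf earlier
in the thread; this is an illustration, not a tested configuration):

    <disk type='block' device='disk'>
      <!-- phy: backend: hand the raw block device straight to blkback -->
      <driver name='phy'/>
      <source dev='/dev/drbd0'/>
      <target dev='xvda'/>
    </disk>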
Hi, thanks for the answer.

As you suggested, I changed the configuration and the domU starts.

Unluckily, now I want to migrate the domU to another hypervisor.

This is the configuration of one hyp:

(xend-unix-server yes)
(xend-relocation-server yes)
(xend-unix-path /var/lib/xend/xend-socket)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
(network-script 'network-bridge')
(vif-script vif-bridge)
(dom0-min-mem 256)
(dom0-cpus 0)
(vncpasswd '')

and this is the configuration of the other one:

(xend-unix-server yes)
(xend-relocation-server yes)
(xend-unix-path /var/lib/xend/xend-socket)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
(network-script network-bridge)
(vif-script vif-bridge)
(dom0-min-mem 256)
(dom0-cpus 0)
(vncpasswd '')

The command I execute is:

[root@hyp10 ~]# xm migrate --live SLSPTEST hyp11
Error: /usr/lib64/xen/bin/xc_save 27 13 0 0 1 failed

As you can see, the migration fails. This is the error log on the original hyp:

[2008-04-24 17:02:00 8572] ERROR (XendDomainInfo:1950) XendDomainInfo.resume: xc.domain_resume failed on domain 13.
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1944, in resumeDomain
    self._createDevices()
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1506, in _createDevices
    devid = self._createDevice(devclass, config)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1478, in _createDevice
    return self.getDeviceController(deviceClass).createDevice(devConfig)
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py", line 113, in createDevice
    raise VmError("Device %s is already connected." % dev_str)
VmError: Device xvda (51712, vbd) is already connected.

It seems the domU device xvda is already connected. This error seems due to a
drbd lock.

What do you think? What should I check?

Thanks

Marco

PS: I'm going to write to the drbd mailing list to verify their guide...

2008/4/24, Ross S. W. Walker <rwalker@medallion.com>:
> It doesn't really matter anyway, because listing the device as phy:drbd0
> would give you the exact same result, which is to attach xenblk on the
> backend.
> [...]
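With a relocation setup like the one above, a quick sanity check before digging
further (a sketch, assuming the default relocation port 8002 shown in the config)
is to confirm that xend on the destination is actually listening for relocation
requests:

    # on hyp11, the migration target
    netstat -tlnp | grep 8002    # should show the xend (python) process listening on port 8002

If nothing is listening, the relocation server settings have not taken effect
and xend on that host needs a restart; if it is listening, the failure is more
likely on the sending side, as the xc_save error below suggests.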
Ross S. W. Walker
2008-Apr-24 15:39 UTC
RE: [Xen-users] drbd, xen and disk not accessible..
Marco Strullato wrote:
> [...]
>
> It seems the domU device xvda is already connected. This error seems
> due to a drbd lock.
>
> What do you think? What should I check?

Ok, so you have the domain in the xenstore on hyp10 and not in the xenstore
on hyp11, you run the live migrate and it gets the error. Do I have that
right?

If the domain is in hyp11, delete it, otherwise they will collide on
migration. You can only have one resource with a given UUID defined
per hypervisor.

-Ross
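A sketch of how one might check for and remove such a stale definition on the
target, using the tools already mentioned in this thread (whether virsh undefine
or xm delete applies depends on how the domain was registered on that host):

    # on hyp11, the target: is SLSPTEST already known there?
    xm list
    virsh list --all

    # if it shows up as an inactive/managed domain, remove the stale definition
    virsh undefine SLSPTEST    # for a domain defined through libvirt, as earlier in the thread
    # or, for a domain managed by xend itself:
    # xm delete SLSPTEST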
> Ok, so you have the domain in the xenstore on hyp10 and not in
> the xenstore on hyp11, you run the live migrate and it gets the
> error. Do I have that right?

Yes, I confirm: I have the domain in the xenstore of hyp10 and not in hyp11.
When I execute the migration I get:

[root@hyp10 ~]# xm migrate --live SLSPTEST hyp11
Error: /usr/lib64/xen/bin/xc_save 28 14 0 0 1 failed
Usage: xm migrate <Domain> <Host>
<...cut...>

After this process the domain is stuck on both servers. The state is:

hyp10 -> -b----
hyp11 -> -bp---
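When a live migration dies in xc_save like this, one possible cleanup and
diagnosis path (hedged: destroying the half-created copy discards its state,
so only do it on the side that never finished receiving) is roughly:

    # on hyp11: remove the half-created, paused copy left behind by the failed migration
    xm destroy SLSPTEST

    # on both hosts: look for the real reason xc_save failed
    tail -n 100 /var/log/xen/xend.log
    tail -n 100 /var/log/xen/xend-debug.log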
Could this problem be due to a 64/32-bit issue? Both hypervisors are Xen 3.2
64-bit and the domain I want to migrate is 32-bit. Could that be a problem?
I suppose it shouldn't be...

http://wiki.xensource.com/xenwiki/XenFaq#head-5f7176b3909cb0382cece43a6a8fc25a3a114e93

Marco

2008/4/28 Marco Strullato <marco.strullato@gmail.com>:
> [...]
> After this process the domain is stuck on both servers. The state is:
> hyp10 -> -b----
> hyp11 -> -bp---
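One way to check what guest types each 64-bit hypervisor actually supports
(a sketch; the xen-3.0-x86_32p entry, if present, indicates 32-bit PAE guest
support, which is what a 32-bit CentOS 4.5 xenU kernel needs):

    # run on both hyp10 and hyp11 and compare
    xm info | grep xen_caps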