Hi,
I am trying to (live-)migrate a paravirtualized machine from hosta to
hostb:
-----8<----
hosta:~ # xm list pvm
Name ID Mem VCPUs State Time(s)
pvm 64 384 1 -b---- 0.5
hostb:~ # xm list pvm
Error: Domain 'pvm' does not exist.
hosta:~ # xm migrate pvm 192.168.0.2
Error: /usr/lib64/xen/bin/xc_save 4 64 0 0 0 failed
Usage: xm migrate <Domain> <Host>
Migrate a domain to another machine.
Options:
-h, --help Print this help.
-l, --live Use live migration.
-p=portnum, --port=portnum
Use specified port for migration.
-r=MBIT, --resource=MBIT
Set level of resource usage for migration.
hosta:~ # xm list pvm
Name ID Mem VCPUs State Time(s)
pvm 64 384 1 ---s-- 0.6
hostb:~ # xm list pvm
Name ID Mem VCPUs State Time(s)
pvm 61 384 1 --p--- 0.0
-----8<----
As you can see, the machine (called "pvm") got migrated, but on hosta
the status is still "s", which should not happen, as far as I can see.
Both VMs are in logical volumes, and the volumes are connected via drbd
in primary/primary mode.
The hosts are openSUSE 11.0 with Xen 3.2.
Since I am a newbie to Xen, I might have missed something - but what?
Any hint is appreciated,
Rainer
Hello Rainer,
I. By default, Xend does not start an HTTP server; it only starts a Unix
domain socket management server for xm to communicate with Xend. To
support cross-machine live migration, the relocation server has to be
enabled.
1. Make a backup of your xend-config.sxp file (on hosta):
cp -pr /etc/xen/xend-config.sxp /etc/xen/xend-config.sxp.default
2. Edit /etc/xen/xend-config.sxp and make the following changes:
#(xend-unix-server yes)
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
#(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
3. Restart Xend:
service xend restart
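A quick way to verify that the relocation server actually came up is to
look for the listener on the configured port and to test it from the
other host (a sketch; assumes the default port 8002 set above and that
netstat/telnet are available):
netstat -tlnp | grep 8002     # on the destination host
telnet hostb 8002             # from hosta: the source must reach the destination's relocation port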
II. Exporting shared storage via NFS
Configure NFS on hosta and export the shared storage directory.
1. Edit /etc/exports and add in the following line:
/xen *(rw,sync,no_root_squash)
2. Save /etc/exports and restart the NFS server. Make sure that the NFS server
starts by default:
service nfs start
chkconfig nfs on
3. After starting the NFS server on hosta, we can then mount it on hostb:
mount hosta:/xen /xen
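If the mount should also survive reboots on hostb, an /etc/fstab entry
along these lines would do it (a sketch, assuming the same /xen export
and mount point as above):
hosta:/xen   /xen   nfs   defaults   0 0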
4. Start the Xen guest on hosta:
xm create -c pvm
III. Performing live migration
1. Perform the live migration from hosta to hostb by running the following
command:
xm migrate --live pvm hostb
2. In separate terminal windows on both Xen hosts, run the following
command:
watch -n1 xm list
3. Watch how the live migration proceeds, and note how long it takes.
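For reference, the same migration with an explicit relocation port (the
--port option from the xm usage text, matching the xend-relocation-port
value in section I) would be:
xm migrate --live --port=8002 pvm hostb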
regards
Sri
I've had issues like this, as well, and have never been able to pin down
what exactly causes it. In my experience, the (live) migration works
most of the time, but occasionally I get something exactly like what
you're seeing here. Is the hardware (specifically the CPUs) very
similar on your hosta and hostb machines? What kind of shared storage
are you using?
-Nick
I don't think the issues Rainer is having are due to misconfiguration of
the relocation service or shared storage. If it were a shared storage
issue or a relocation service issue, Xen would not even attempt to save
the VM state, which is where the migration is failing. Xen checks to
make sure the destination machine has the relocation service available,
and checks to make sure that the other machine can see the storage
before it ever does a state save.
I've experienced exactly the same symptoms as he describes, but it also
works for me some of the time.
-Nick
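When xc_save fails like this, the underlying error usually ends up in
the xend logs on the source host, which is worth checking right after a
failed attempt (the usual Xen 3.x log locations; exact paths may vary by
distribution):
tail -n 50 /var/log/xen/xend.log
tail -n 50 /var/log/xen/xend-debug.log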
On Tue, Mar 03, 2009 at 06:32:27AM -0700, Nick Couchman wrote:
Hi,
> Is the hardware (specifically the CPUs) very
> similar on your hosta and hostb machines?
The machines are 100% the same.
> What kind of shared storage are you using?
Both machines have their own local storage. I use LVM. The logical
volumes on hosta and hostb are connected via drbd in primary/primary mode.
Rainer
On Tue, Mar 03, 2009 at 06:26:27PM +0530, Srivathsa wrote:
> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
> <html>
> <head>
Errmmm....
> 1. Make a backup of your xend-config.sxp file (on hosta):<br>
> cp -pr /etc/xen/xend-config.sxp /etc/xen/xend-config.sxp.default<br>
> <br>
> 2. Edit /etc/xen/xend-config.sxp and make the following changes:<br>
> #(xend-unix-server yes)<br>
> (xend-relocation-server yes)<br>
> (xend-relocation-port 8002)<br>
> (xend-relocation-address '')<br>
> (xend-relocation-hosts-allow '')<br>
> #(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')<br>
> 3. Restart Xend:<br>
No, I do not have a relocation-server issue.
Rainer
What filesystem are you using on the DRBD-based storage?
On Tue, Mar 03, 2009 at 06:50:29AM -0700, Nick Couchman wrote:
> What filesystem are you using on the DRBD-based storage?
ext3
Rainer
This could be really, really bad. I'm actually surprised you haven't
experienced any issues with the filesystem already. In an active/active
(primary/primary) DRBD setup, my understanding is that you need to treat
this exactly the same as if you had a SAN volume you were presenting to
more than one host, which means a cluster-aware filesystem. I don't know
that this is what's causing your (live) migration issues, but it
probably won't be long before you start seeing some filesystem
corruption on your DRBD volume. See the following page:
http://www.drbd.org/users-guide/s-dual-primary-mode.html
-Nick
On Tue, Mar 03, 2009 at 06:57:28AM -0700, Nick Couchman wrote:
[drbd in primary/primary]
I was not really correct - there is no filesystem at all in my drbd
devices:
-------8<-------
hosta:~ # fdisk -l /dev/drbd4
Disk /dev/drbd4: 2147 MB, 2147381248 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d0023
Device Boot Start End Blocks Id System
/dev/drbd4p1 1 28 224878+ 82 Linux swap / Solaris
/dev/drbd4p2 29 261 1871572+ 83 Linux
-------8<-------
The same at hostb.
So, my drbd devices contain partitions, and these partitions contain
filesystems (ext3 and swap, in my case).
> See the following page:
> http://www.drbd.org/users-guide/s-dual-primary-mode.html
Now I am really confused :-)
Should I switch to drbd-user mailing list?
Rainer
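A quick way to confirm that both nodes agree on the DRBD state during a
migration attempt is to watch the standard drbd status interface on each
host; both sides should stay Connected and Primary/Primary throughout (a
sketch, assuming drbd 8.x):
watch -n1 cat /proc/drbd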
So, in your Xen configs, are you using phy: to access the DRBD devices
directly? If so, then you should be okay without a cluster-aware
filesystem - sorry about that, I was under the impression that you were
mounting the DRBD device on the dom0s and then storing disk files on it.
-Nick
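To make the distinction concrete, the two patterns look roughly like
this in a guest config (the phy: line matches Rainer's setup shown later
in the thread; the file: path is purely hypothetical):
# direct block access - no filesystem on the DRBD device in dom0,
# so no cluster-aware filesystem is needed:
disk = [ 'phy:/dev/drbd4,xvda,w' ]
# disk file on a filesystem mounted in dom0 - if that filesystem sits on
# dual-primary DRBD and is mounted on both hosts, it must be cluster-aware:
# disk = [ 'file:/xen/pvm/disk.img,xvda,w' ]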
Rainer Sokoll wrote:
> So, my drbd devices contain partitions, and these partitions contain
> filesystems (ext3 and swap, in my case).
Maybe this setup falls into the one described in /etc/drbd.conf (from
the "disk" sub-section)?
# In some special circumstances the device mapper stack manages to
# pass BIOs to DRBD that violate the constraints that are set forth
# by DRBD's merge_bvec() function and which have more than one bvec.
# A known example is:
# phys-disk -> DRBD -> LVM -> Xen -> misaligned partition (63) -> DomU FS
# Then you might see "bio would need to, but cannot, be split:" in
# the Dom0's kernel log.
# The best workaround is to properly align the partition within
# the VM (e.g. start it at sector 1024). (Costs 480 KiByte of storage.)
# Unfortunately the default of most Linux partitioning tools is
# to start the first partition at an odd number (63). Therefore
# most distributions' install helpers for virtual linux machines will
# end up with misaligned partitions.
# The second best workaround is to limit DRBD's max bvecs per BIO
# (= max-bio-bvecs) to 1. (Costs performance.)
# max-bio-bvecs 1;
Maybe you could try with the option "max-bio-bvecs 1;" as described...
--
Maxim Doucet (maxim@alamaison.fr)
sys engineer @ la maison
+33 (0)1 41 12 2000
www.alamaison.fr
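Applied to the resource config Rainer posts in his next message, that
workaround would sit in a disk sub-section, roughly like this (a sketch
built from the drbd.conf comment above, not a tested config):
resource stunnel {
  disk {
    max-bio-bvecs 1;   # limit DRBD to one bvec per BIO; costs performance
  }
  # ... on/net sections as in the existing config ...
}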
On Tue, Mar 03, 2009 at 07:27:18AM -0700, Nick Couchman wrote:
> So, in your Xen configs, are you using phy: to access the DRBD devices
> directly?
Yes, you are correct. I've used a file based storage backend before, but
it was too slow.
drbd.conf:
----8<----
resource stunnel {
  on jitxen01 {
    address 192.168.0.1:7794;
    device /dev/drbd4;
    disk /dev/XEN/stunnel;
    meta-disk internal;
  }
  on jitxen02 {
    address 192.168.0.2:7794;
    device /dev/drbd4;
    disk /dev/XEN/stunnel;
    meta-disk internal;
  }
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
}
----8<----
machine config in xen:
----8<----
name="stunnel"
[...]
disk=[ 'phy:/dev/drbd4,xvda,w', ]
----8<----
> If so, then you should be okay without a cluster-aware
> filesystem
Now I feel much better :-)
> - sorry about that, I was under the impression that you were
> mounting the DRBD device on the dom0s and then storing disk files on it.
This sounds like a not-so-clever idea to me :-)
Rainer
Hello Rainer,
I've got pretty much the same issue (drbd 8.3.0): I've got a huge drbd
in pri/pri role, with LVM on top and xfs in the LVs. The Xen VM boots
from the LV, no problem. If I now want to migrate, I get the same error
as you, but in my case drbd logs something like "split-brain detected ->
disconnected" in dmesg - perhaps you've got the same?
drbd0: Split-Brain detected, dropping connection!
drbd0: self 20D88E3F20F7E8C9:11227E17F1A34EBD:F14436F7DEC14D2E:D51BA840A9E19E2D
drbd0: peer 5664952031DE8E53:11227E17F1A34EBD:F14436F7DEC14D2E:D51BA840A9E19E2D
drbd0: helper command: /sbin/drbdadm split-brain minor-0
drbd0: helper command: /sbin/drbdadm split-brain minor-0 exit code 0 (0x0)
drbd0: conn( WFReportParams -> Disconnecting )
drbd0: error receiving ReportState, l: 4!
drbd0: asender terminated
drbd0: Terminating asender thread
drbd0: Connection closed
drbd0: conn( Disconnecting -> StandAlone )
And on the other node:
drbd0: meta connection shut down by peer.
drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
drbd0: asender terminated
drbd0: Terminating asender thread
drbd0: sock was shut down by peer
drbd0: short read expecting header on sock: r=0
drbd0: Creating new current UUID
drbd0: Connection closed
drbd0: conn( NetworkFailure -> Unconnected )
drbd0: receiver terminated
It seems that while migrating there is some overlapping write access to
the drbd, which makes drbd decide it has a split-brain. Even with
after-sb-0pri discard-zero-changes;
after-sb-1pri violently-as0p;
after-sb-2pri violently-as0p;
I get these log entries... So that's the point where a DLM-aware
filesystem comes into play. The question is now how we have to "mix"
drbd, LVM and ocfs2/gfs for a working playground... ;)
I previously implemented the drbd -> ocfs2 -> sparse-files approach;
that one failed with drbd split-brain, too. Perhaps
drbd -> lvm -> ocfs2, mounting that volume in the VM, could do the
trick?
Cheers,
Florian
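For anyone who ends up in the split-brain state Florian shows, the
manual recovery procedure documented in the DRBD users guide is roughly
the following (a sketch; drbd 8.x syntax, with the resource name from
Rainer's config assumed, run on the node whose changes you are willing
to discard):
drbdadm secondary stunnel
drbdadm -- --discard-my-data connect stunnel
# and on the surviving node, if it dropped to StandAlone:
drbdadm connect stunnel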