sanjay kushwaha
2006-Sep-01 21:36 UTC
[Xen-devel] Live vm migration broken in latest xen-unstable
Folks,
I am experiencing that live migration is not working in the latest xen-unstable.
I get the following error message during migration:
[root@pc5 ksanjay]# xm migrate --live 1 199.77.138.23
Error: /usr/lib/xen/bin/xc_save 18 1 0 0 1 failed
[root@pc5 ksanjay]#
I traced the problem to a function in Xen named set_sh_allocation() in the file
xen/arch/x86/mm/shadow/common.c.
tools/libxc/xc_linux_save.c:xc_linux_save() is called from the Python script,
and it makes the following hypercall:
if (live) {
if (xc_shadow_control(xc_handle, dom,
XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
NULL, 0, NULL, 0, NULL) < 0) {
ERR("Couldn''t enable shadow mode");
goto out;
}
last_iter = 0;
} else {
-----------
This particular hypercall leads to a call to set_sh_allocation(), which fails
in the following code:
if ( d->arch.shadow.total_pages < pages )
{
/* Need to allocate more memory from domheap */
pg = alloc_domheap_pages(NULL, SHADOW_MAX_ORDER, 0);
if ( pg == NULL )
{
SHADOW_PRINTK("failed to allocate shadow pages.\n");
return -ENOMEM;
}
alloc_domheap_pages() fails and returns NULL. However, I think I have enough
memory available, so this function should not fail.
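For reference, the failing hypercall can be reproduced outside of xm with a
small standalone libxc program. This is a sketch assuming the xen-unstable
libxc of this period: the xc_shadow_control() arguments mirror the
xc_linux_save.c call quoted above, and XEN_DOMCTL_SHADOW_OP_OFF is assumed
available to undo the test.

/* Minimal reproducer for the failing hypercall; build in dom0 with
 *   gcc -o logdirty-test logdirty-test.c -lxenctrl
 * and run as root. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <xenctrl.h>

int main(int argc, char **argv)
{
    int xc_handle;
    uint32_t dom;

    if ( argc != 2 )
    {
        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
        return 1;
    }
    dom = atoi(argv[1]);

    xc_handle = xc_interface_open();   /* returns a file descriptor */
    if ( xc_handle < 0 )
    {
        perror("xc_interface_open");
        return 1;
    }

    /* The same call xc_linux_save() makes for a live migration; it
     * fails when the hypervisor cannot grow the shadow pool. */
    if ( xc_shadow_control(xc_handle, dom,
                           XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
                           NULL, 0, NULL, 0, NULL) < 0 )
    {
        perror("xc_shadow_control(ENABLE_LOGDIRTY)");
        xc_interface_close(xc_handle);
        return 1;
    }
    printf("log-dirty mode enabled for domain %u\n", dom);

    /* Undo the test so the domain is left as we found it. */
    xc_shadow_control(xc_handle, dom, XEN_DOMCTL_SHADOW_OP_OFF,
                      NULL, 0, NULL, 0, NULL);
    xc_interface_close(xc_handle);
    return 0;
}

(Whether the heap really has room can be cross-checked against the free_memory
field in the xm info output below; alloc_domheap_pages() with SHADOW_MAX_ORDER
asks for a physically contiguous run of 2^SHADOW_MAX_ORDER pages, so a nearly
empty or fragmented heap makes it fail.)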
Is anybody else experiencing the same problem? Could someone please
tell me how to fix it?
Below is the xm info output for my machine.
Thanks for your help,
Sanjay
[root@pc5 ~]# xm info
host : pc5
release : 2.6.16.13-xen0
version : #3 Fri Sep 1 17:13:13 EDT 2006
machine : i686
nr_cpus : 4
nr_nodes : 1
sockets_per_node : 2
cores_per_socket : 1
threads_per_core : 2
cpu_mhz : 2791
hw_caps : bfebfbff:00000000:00000000:00000080:00004400
total_memory : 511
free_memory : 1
xen_major : 3
xen_minor : 0
xen_extra : -unstable
xen_caps : xen-3.0-x86_32
xen_pagesize : 4096
platform_params : virt_start=0xfc000000
xen_changeset : Mon Aug 28 08:08:41 2006 +0100 11267:68a1b61ecd28
cc_compiler : gcc version 4.0.2 20051125 (Red Hat 4.0.2-8)
cc_compile_by : root
cc_compile_domain : netlab.cc.gatech.edu
cc_compile_date : Fri Sep 1 17:12:28 EDT 2006
xend_config_format : 2
[root@pc5 ~]#
--
----------------------
PhD Student, Georgia Tech
http://www.cc.gatech.edu/~ksanjay/
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ewan Mellor
2006-Sep-05 15:57 UTC
Re: [Xen-devel] Live vm migration broken in latest xen-unstable
On Fri, Sep 01, 2006 at 05:36:00PM -0400, sanjay kushwaha wrote:

> Folks,
> I am experiencing that live migration is not working in the latest
> xen-unstable. I get the following error message during migration:
>
> [root@pc5 ksanjay]# xm migrate --live 1 199.77.138.23
> Error: /usr/lib/xen/bin/xc_save 18 1 0 0 1 failed
>
> [...rest of report snipped...]

I've put some changes into xen-unstable today which might help. The last
fix is on its way through testing now. Look out for xen-unstable
changeset 11422, and try that, see how you get on.

Cheers,

Ewan.
sanjay kushwaha
2006-Sep-05 19:09 UTC
Re: [Xen-devel] Live vm migration broken in latest xen-unstable
Hi Ewan,
I did a "hg pull -u" on my tree, which also got changeset 11422, but I
am still facing the same problem. Btw, this changeset seems to be
specific to HVM domains, while I am facing this problem with a
paravirtualized domain.

Thanks,
Sanjay

On 9/5/06, Ewan Mellor <ewan@xensource.com> wrote:
> I've put some changes into xen-unstable today which might help. The
> last fix is on its way through testing now. Look out for xen-unstable
> changeset 11422, and try that, see how you get on.
>
> [...earlier quotes snipped...]

--
----------------------
PhD Student, Georgia Tech
http://www.cc.gatech.edu/~ksanjay/
Yoshiaki Tamura
2006-Sep-07 02:28 UTC
Re: [Xen-devel] Live vm migration broken in latest xen-unstable
sanjay kushwaha wrote:
> Hi Ewan,
> I did a "hg pull -u" on my tree, which also got changeset 11422, but I
> am still facing the same problem. Btw, this changeset seems to be
> specific to HVM domains, while I am facing this problem with a
> paravirtualized domain.

I've tested live migration with a paravirt domain at changeset 11429,
but didn't see such a problem. Did you also rebuild Xen and the dom0
kernel after you updated the repository? If not, I would recommend
doing so.

Yoshi Tamura

--
TAMURA, Yoshiaki
NTT Cyber Space Labs
OSS Computing Project, Kernel Group
E-mail: tamura.yoshiaki@lab.ntt.co.jp
TEL: (046)-859-2771  FAX: (046)-855-1152
Address: 1-1 Hikarinooka, Yokosuka, Kanagawa 239-0847 JAPAN
Tim Deegan
2006-Sep-07 21:29 UTC
Re: [Xen-devel] Live vm migration broken in latest xen-unstable
Hi Sanjay,

Does the attached patch fix this problem for you? It tries to make sure
there is enough spare memory to enable shadow pagetables on the domain
before starting the migration.

Cheers,

Tim

At 15:09 -0400 on 05 Sep (1157468999), sanjay kushwaha wrote:
> Hi Ewan,
> I did a "hg pull -u" on my tree, which also got changeset 11422, but I
> am still facing the same problem.
>
> [...rest snipped...]
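Tim's attachment is not preserved in this archive. The sketch below only
illustrates the approach he describes, and is not his patch: it assumes the
same eight-argument xc_shadow_control() quoted earlier, with
XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION as the shadow op for sizing the pool and
the mb argument carrying the size in megabytes; the sizing formula is a
placeholder, not the real one.

/* Sketch only -- not the attached patch. The idea: grow the shadow
 * pool explicitly before enabling log-dirty mode for migration, so the
 * implicit allocation during ENABLE_LOGDIRTY cannot fail. */
#include <stdint.h>
#include <xenctrl.h>

static int reserve_shadow_pool(int xc_handle, uint32_t dom,
                               unsigned long vm_mb)
{
    /* Guessed rule of thumb for the pool size; the real patch would
     * derive this from the domain's memory footprint. */
    unsigned long shadow_mb = (vm_mb / 256) + 4;

    /* Same entry point as the ENABLE_LOGDIRTY call quoted earlier;
     * here the mb pointer requests the pool size in megabytes. */
    return xc_shadow_control(xc_handle, dom,
                             XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
                             NULL, 0, &shadow_mb, 0, NULL);
}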
sanjay kushwaha
2006-Sep-08 13:48 UTC
Re: [Xen-devel] Live vm migration broken in latest xen-unstable
Hi Tim,
Yes, this patch solved my problem; live VM migration now works. But I
am experiencing another problem: when I reattach to the migrated VM's
console on the destination machine, I see the following backtrace in
the dmesg of the guest VM.
------------[ cut here ]------------
kernel BUG at drivers/xen/netfront/netfront.c:717!
invalid opcode: 0000 [#1]
SMP
Modules linked in:
CPU: 0
EIP: 0061:[<c02598e0>] Not tainted VLI
EFLAGS: 00010082 (2.6.16.13-xenU #15)
EIP is at network_alloc_rx_buffers+0x470/0x4b0
eax: c056fc80 ebx: ccbc0d80 ecx: d1001040 edx: c05d0000
esi: c05d02a0 edi: c05d034c ebp: c0551f14 esp: c0551eac
ds: 007b es: 007b ss: 0069
Process xenwatch (pid: 8, threadinfo=c0550000 task=c057a540)
Stack: <0>00000208 00000000 0000cbc1 00000000 c05d034c c05d31b8
c05d0000 00000000
00000337 00000337 0000002f 00000208 ccbc1000 000000d1 cdd25838 00000100
000000d1 00000011 c05d0000 00000001 c02f8013 c05d0000 00000001 c05d02a0
Call Trace:
[<c01058ed>] show_stack_log_lvl+0xcd/0x120
[<c0105aeb>] show_registers+0x1ab/0x240
[<c0105e11>] die+0x111/0x240
[<c0106178>] do_trap+0x98/0xe0
[<c0106491>] do_invalid_op+0xa1/0xb0
[<c01052c7>] error_code+0x2b/0x30
[<c025a554>] backend_changed+0x1a4/0x250
[<c025517e>] otherend_changed+0x7e/0x90
[<c0253361>] xenwatch_handle_callback+0x21/0x60
[<c025345d>] xenwatch_thread+0xbd/0x160
[<c0132b0c>] kthread+0xec/0xf0
[<c0102c45>] kernel_thread_helper+0x5/0x10
Code: 82 04 15 00 00 8d 9e f8 14 00 00 e8 db 78 ea ff 8b 5d d8 39 9a
fc 14 00 00 0f 84 16 fe ff ff e9 51 ff ff ff 8d b4 26 00 00 00 00 <0f>
0b cd 02 2c a0 2f c0 e9 cd fc ff ff 0f 0b d1 02 2c a0 2f c0
[root@localhost ~]#
Does anyone know if this is a known problem?
Thanks for your help.
Sanjay
On 9/7/06, Tim Deegan <Tim.Deegan@xensource.com> wrote:
> Hi Sanjay,
>
> Does the attached patch fix this problem for you? It tries to make
> sure there is enough spare memory to enable shadow pagetables on the
> domain before starting the migration.
>
> Cheers,
>
> Tim
>
> [...earlier quotes snipped...]
--
----------------------
PhD Student, Georgia Tech
http://www.cc.gatech.edu/~ksanjay/
John Byrne
2006-Sep-08 20:09 UTC
[Xen-devel] Re: Live vm migration broken in latest xen-unstable
I have live migration working fine on x86-64 with changeset 11433;
however, the reason I thought it was broken is that it currently seems
to need a lot more memory. Using SuSE's latest xenpreview bits I could
migrate a 3GB guest between two machines (6GB and 4GB). With the new
bits, a 2GB guest could be migrated off the 6GB machine, but the 4GB
machine could not migrate it back. I currently have the VM set to 1.5GB
and it ping-pongs fine.

Any idea why so much extra memory seems to be required?

John Byrne

sanjay kushwaha wrote:
> Hi Tim,
> Yes, this patch solved my problem; live VM migration now works. But I
> am experiencing another problem.
>
> ...snipped...
sanjay kushwaha
2006-Sep-09 22:16 UTC
Re: [Xen-devel] Live vm migration broken in latest xen-unstable
Hi Folks,
I am facing another problem: after the VM migration, the frontend of
the guest VM doesn't properly attach to the new backend. I observed
that after VM migration, the disk requests sent by the blkfront driver
are not seen by the blkif_schedule process in the blkback driver.

Do you think this is a known problem? Has anybody else observed this?

Thanks,
Sanjay

On 9/8/06, sanjay kushwaha <sanjay.kushwaha@gmail.com> wrote:
> Hi Tim,
> Yes, this patch solved my problem; live VM migration now works. But I
> am experiencing another problem with netfront after migration.
>
> [...rest snipped...]

--
----------------------
PhD Student, Georgia Tech
http://www.cc.gatech.edu/~ksanjay/