I think I am having a problem similar to this one:

http://lists.xensource.com/archives/html/xen-users/2005-04/msg00251.html

on Xen 2.0.7. I can live migrate successfully 90% of the time, but I do
hit this. Is there an update on the status of a fix?

The last part of the xfer log:

[1123789954.329159] Saving memory pages: iter 28 0%
28: sent 34, skipped 0, [DEBUG] Conn_sxpr> (xfr.err 22)[DEBUG] Conn_sxpr< err=0
Retry suspend domain (0)
(repeated 350 times or so)
Retry suspend domain (0)
Unable to suspend domain. (0)
Unable to suspend domain. (0)
Domain appears not to have suspended: 0
Domain appears not to have suspended: 0
4155 [WRN] XFRD> Transfer errors:
4155 [WRN] XFRD> state=XFR_STATE err=1
4155 [INF] XFRD> Xfr service err=1

Thanks much for any help.

Steven
> I think I am having a problem similar to this one:
>
> http://lists.xensource.com/archives/html/xen-users/2005-04/msg00251.html
>
> on Xen 2.0.7. I can live migrate successfully 90% of the time, but I do
> hit this. Is there an update on the status of a fix?

This is nasty -- the domain is probably crashing during the migrate. We
have never seen this failure mode on 2.0.7, even after many thousands of
migrates.

Have you compiled your own kernel, or are you using one of ours? (I
presume 2.6.11?)

What are you running in the guest?

What does 'xm dmesg' show?

Ian
Please find my xm_dmesg.txt attached. Here is my Xen config file:

kernel = "/boot/vmlinuz-2.6.11-xenU"
memory = 256
name = "rawhide"
nics = 1
disk = ['phy:gnbd/gndb1xennode1root,sda1,w']
root = "/dev/sda1 ro"

I downloaded 2.0.7 and did a make world; make install.

If you need any more information, just let me know...

Thanks much,
Steven

Ian Pratt wrote:

> This is nasty -- the domain is probably crashing during the migrate. We
> have never seen this failure mode on 2.0.7, even after many thousands
> of migrates.
>
> Have you compiled your own kernel, or are you using one of ours? (I
> presume 2.6.11?)
>
> What are you running in the guest?
>
> What does 'xm dmesg' show?
>
> Ian
> I downloaded 2.0.7 and did a make world; make install.

Please can you try using our binary installs.

Also, are you using the 2.6.11 kernel or the 2.4?

After the failed migrate, what does 'xm dmesg' show on the source host?

Ian
I am using the 2.6.11 kernel. I used the binary installs, with the
exception of 'xfrd', which I had to build on my Fedora Core 4 boxes
(since the binary install is linked with versions of libcurl, libssl,
and libcrypto that aren't there -- all other tools and kernels are from
the binary install).

I got the same problem when migrating. I was successful 8 times, but on
the 9th migrate it failed. Attached is the output of 'xm dmesg' on the
source host. I will keep everything as it is now if you need more
information from the logs.

Thanks,
Steven

Ian Pratt wrote:

>> I downloaded 2.0.7 and did a make world; make install.
>
> Please can you try using our binary installs.
>
> Also, are you using the 2.6.11 kernel or the 2.4?
>
> After the failed migrate, what does 'xm dmesg' show on the source host?
>
> Ian
I'm having the exact same issue here with 2.0.7. I'm using RedHat AS 4
on one machine and Fedora Core 4 on the other, using Xen 2.0.7 built
from source. The guest OS is RH AS 4. I'm also able to migrate a number
of times in a row without a problem, but periodically get the exact same
errors that Steven Yelton posted. If my guest OS is more or less idle,
it seems to be less likely to happen; but if I start a big compile job
and try to do the migrate, it is more likely to fail.

-- Ray
This is a snippet from the originating host's xfrd.log when the failure
happened. Meanwhile, the other side's xfrd.log looked good until it got
an 'Error when reading from state file'.

-- Ray

-----

1: sent 45431, skipped 3721, delta 16818ms, dom0 23%, target 9%, sent 88Mb/s, dirtied 8Mb/s 4557 pages
[1124922400.612070] Saving memory pages: iter 2 0% 22% 47% 71%
2: sent 4054, skipped 500, delta 1571ms, dom0 22%, target 21%, sent 84Mb/s, dirtied 64Mb/s 3116 pages
[1124922402.183740] Saving memory pages: iter 3 0% 31% 73%
3: sent 2693, skipped 420, delta 1022ms, dom0 23%, target 19%, sent 86Mb/s, dirtied 65Mb/s 2053 pages
[1124922403.205810] Saving memory pages: iter 4 0% 48%
4: sent 1874, skipped 176, delta 705ms, dom0 23%, target 17%, sent 87Mb/s, dirtied 61Mb/s 1315 pages
[1124922403.911777] Saving memory pages: iter 5 0% 87%
5: sent 1107, skipped 201, delta 470ms, dom0 21%, target 40%, sent 77Mb/s, dirtied 128Mb/s 1846 pages
[1124922404.382522] Saving memory pages: iter 6 0% 63%
6: sent 1491, skipped 349, delta 609ms, dom0 22%, target 46%, sent 80Mb/s, dirtied 142Mb/s 2647 pages
[1124922404.992153] Saving memory pages: iter 7 0% 38% 79%
7: sent 2348, skipped 295, delta 890ms, dom0 23%, target 18%, sent 86Mb/s, dirtied 102Mb/s 2797 pages
[1124922405.882768] Saving memory pages: iter 8 0% 38% 84%
8: sent 2409, skipped 384, delta 860ms, dom0 24%, target 6%, sent 91Mb/s, dirtied 27Mb/s 713 pages
[1124922406.742903] Saving memory pages: iter 9 0%
9: sent 624, skipped 83, delta 230ms, dom0 23%, target 9%, sent 88Mb/s, dirtied 58Mb/s 410 pages
[1124922406.973505] Saving memory pages: iter 10 0%
10: sent 404, skipped 0, delta 142ms, dom0 26%, target 6%, sent 93Mb/s, dirtied 51Mb/s 223 pages
[1124922407.118014] Saving memory pages: iter 11 0%
11: sent 127, skipped 89, delta 47ms, dom0 29%, target 6%, sent 88Mb/s, dirtied 150Mb/s 216 pages
[1124922407.163792] Saving memory pages: iter 12 0%
12: sent 210, skipped 0, delta 78ms, dom0 25%, target 10%, sent 88Mb/s, dirtied 132Mb/s 315 pages
[1124922407.242383] Saving memory pages: iter 13 0%
13: sent 309, skipped 0, delta 113ms, dom0 25%, target 7%, sent 89Mb/s, dirtied 91Mb/s 317 pages
[1124922407.355431] Saving memory pages: iter 14 0%
14: sent 310, skipped 0, delta 113ms, dom0 25%, target 7%, sent 89Mb/s, dirtied 82Mb/s 283 pages
[1124922407.468703] Saving memory pages: iter 15 0%
15: sent 277, skipped 0, delta 99ms, dom0 26%, target 7%, sent 91Mb/s, dirtied 94Mb/s 287 pages
[1124922407.568408] Saving memory pages: iter 16 0%
16: sent 281, skipped 0, delta 102ms, dom0 26%, target 8%, sent 90Mb/s, dirtied 107Mb/s 334 pages
[1124922407.671120] Saving memory pages: iter 17 0%
17: sent 243, skipped 86, delta 93ms, dom0 25%, target 13%, sent 85Mb/s, dirtied 135Mb/s 385 pages
[1124922407.764443] Saving memory pages: iter 18 0%
18: sent 378, skipped 0, delta 144ms, dom0 24%, target 11%, sent 86Mb/s, dirtied 82Mb/s 363 pages
[1124922407.908636] Saving memory pages: iter 19 0%
19: sent 355, skipped 0, delta 130ms, dom0 26%, target 6%, sent 89Mb/s, dirtied 71Mb/s 283 pages
[1124922408.038797] Saving memory pages: iter 20 0%
20: sent 185, skipped 92, delta 106ms, dom0 17%, target 81%, sent 57Mb/s, dirtied 218Mb/s 707 pages
[1124922408.145378] Saving memory pages: iter 21 0%
21: sent 619, skipped 83, delta 229ms, dom0 23%, target 13%, sent 88Mb/s, dirtied 94Mb/s 657 pages
[1124922408.375295] Saving memory pages: iter 22 0%
22: sent 651, skipped 0, delta 231ms, dom0 25%, target 2%, sent 92Mb/s, dirtied 18Mb/s 130 pages
[1124922408.606305] Saving memory pages: iter 23 0%
23: sent 124, skipped 0, delta 45ms, dom0 31%, target 6%, sent 90Mb/s, dirtied 120Mb/s 166 pages
[1124922408.651758] Saving memory pages: iter 24 0%
24: sent 159, skipped 0, delta 57ms, dom0 28%, target 8%, sent 91Mb/s, dirtied 143Mb/s 249 pages
[1124922408.709624] Saving memory pages: iter 25 0%
25: sent 243, skipped 0, delta 102ms, dom0 21%, target 79%, sent 78Mb/s, dirtied 57Mb/s 178 pages
[1124922408.812289] Saving memory pages: iter 26 0%
26: sent 168, skipped 4, delta 70ms, dom0 21%, target 78%, sent 78Mb/s, dirtied 20Mb/s 43 pages
[1124922408.883198] Saving memory pages: iter 27 0%
27: sent 39, skipped 4, [DEBUG] Conn_sxpr> (xfr.err 22)[DEBUG] Conn_sxpr< err=0
Retry suspend domain (0)
...repeated many times...
Retry suspend domain (0)
Retry suspend domain (0)
Unable to suspend domain. (0)
Unable to suspend domain. (0)
Domain appears not to have suspended: 0
Domain appears not to have suspended: 0
6638 [WRN] XFRD> Transfer errors:
6638 [WRN] XFRD> state=XFR_STATE err=1
6638 [INF] XFRD> Xfr service err=1

-- Ray
One thing I don't understand is that when I look at xc_linux_save.c's
suspend_and_state function, it appears it does:

    xcio_suspend_domain(ioctxt);

retry:

    ... stuff that tries to see if xcio_suspend_domain worked - is that
correct?

Should the xcio_suspend_domain() call be after the retry: label? Or does
that xcio_suspend_domain call guarantee the message is delivered, and
the code under retry: is just waiting for the state to change?

Second, I see it tries 100 times with a 10,000-microsecond sleep in
between, so it only waits 1 second for the domain to suspend. I realize
1 second is a really long time in terms of computing, but I'm wondering
what conditions must all be true for it to be able to suspend a domain.
Could there legitimately be times when it would take longer than a
second for suspension?

-- Ray
> One thing I don't understand is that when I look at xc_linux_save.c's
> suspend_and_state function, it appears it does:
>
>     xcio_suspend_domain(ioctxt);
>
> retry:
>
>     ... stuff that tries to see if xcio_suspend_domain worked - is that
> correct?
>
> Should the xcio_suspend_domain() call be after the retry: label? Or does
> that xcio_suspend_domain call guarantee the message is delivered, and
> the code under retry: is just waiting for the state to change?

The latter - although there was a bug earlier whereby, even though the
message (a 'xfr_vm_suspend' message to xend) was correctly delivered, it
could get ignored by xend. This is fixed in 2.0-testing; the patch is
small, though, so you could just try it on a 2.0 tree if you don't want
to upgrade your kernels.

> Second, I see it tries 100 times with a 10,000-microsecond sleep in
> between, so it only waits 1 second for the domain to suspend. I realize
> 1 second is a really long time in terms of computing, but I'm wondering
> what conditions must all be true for it to be able to suspend a domain.
> Could there legitimately be times when it would take longer than a
> second for suspension?

So the idea here is that the domain itself may want to do various things
before it's ready to suspend - think of an ACPI 'suspend' hook which
allows device drivers etc. to get themselves prepared to power down
rather than just yanking the plug. Of course, since the guest may not
cooperate, you need to time out on this phase. This is what's going on
here.

cheers,

S.
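For readers following the code path being discussed, here is a minimal C
sketch of the send-once / poll-with-timeout pattern that Ray describes and
Steven confirms above. It is an illustration only, not the actual
xc_linux_save.c: xcio_suspend_domain and the 100 x 10,000 us retry budget
come from this thread, while ioctxt_t, domain_is_suspended and
suspend_and_wait are made-up names standing in for the real types and
helpers.

    #include <unistd.h>   /* usleep */

    /* Illustrative sketch only -- not the real Xen 2.0.7 source.
     * The suspend request is sent exactly once; the loop below merely
     * polls for the resulting state change, bounded by a ~1 s timeout. */
    static int suspend_and_wait(ioctxt_t *ioctxt)
    {
        int i;

        /* Ask xend (and, via it, the guest) to suspend. */
        xcio_suspend_domain(ioctxt);

        /* 100 tries x 10,000 us sleeps = roughly 1 second in total. */
        for (i = 0; i < 100; i++) {
            if (domain_is_suspended(ioctxt))   /* assumed helper */
                return 0;                      /* guest has suspended */
            usleep(10000);
        }

        return -1;   /* guest never acknowledged the suspend request */
    }

The point of this shape is that the request goes out exactly once and the
loop only watches for the resulting state change, which is why a request
that gets dropped or ignored shows up as the endless 'Retry suspend
domain' lines in the logs above.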
I installed 2.0-testing. The 2.6.12-xen0/U kernel in the 2.0-testing tree
that I pulled this morning seems to panic right up front, so I went back
to the 2.6.11.12-xen0/U kernel. This combination still causes the suspend
to fail with the same 'Retry suspend domain' messages.

What I'm running in the domain does a ton of NFS access (a build that
uses all networked resources). Not sure if that comes into play or not.
I'll try making it do something less NFS-intensive to see if that works
better.

-- Ray

-----Original Message-----
From: Steven Hand [mailto:Steven.Hand@cl.cam.ac.uk]
Sent: Wednesday, August 24, 2005 7:41 PM
To: Cole, Ray
Cc: xen-users@lists.xensource.com; Steven.Hand@cl.cam.ac.uk
Subject: Re: [Xen-users] Live migration problem

The latter - although there was a bug earlier whereby, even though the
message (a 'xfr_vm_suspend' message to xend) was correctly delivered, it
could get ignored by xend. This is fixed in 2.0-testing; the patch is
small, though, so you could just try it on a 2.0 tree if you don't want
to upgrade your kernels.
Same thing happens when I have a process running that is not
NFS-intensive. Is there some additional debugging information I can
enable to provide more information?

-- Ray
Here is a little more information - not sure if it is related or
not...but...on Domain 0 I am getting an Oops. It does not coincide with
the migration failure itself, but it seems to indicate something isn't
going right :-) This is with the 2.6.11.12-xen0 kernel for Domain 0. It
makes me wonder if something similar is happening in my guest OS
2.6.11.12-xenU kernel as well?

VFS: Busy inodes after unmount. Self-destruct in 5 seconds. Have a nice day...
Unable to handle kernel NULL pointer dereference at virtual address 0000003c
 printing eip:
c016c304
*pde = ma 00000000 pa 55555000
 [<c016d3f6>] generic_forget_inode+0x14a/0x16e
 [<c016a83c>] prune_dcache+0x1eb/0x226
 [<c016acf9>] shrink_dcache_memory+0x1f/0x45
 [<c013f4bc>] shrink_slab+0x10c/0x16f
 [<c0140aed>] balance_pgdat+0x265/0x3ac
 [<c0140cea>] kswapd+0xb6/0xe5
 [<c012e048>] autoremove_wake_function+0x0/0x4b
 [<c0108be6>] ret_from_fork+0x6/0x1c
 [<c012e048>] autoremove_wake_function+0x0/0x4b
 [<c0140c34>] kswapd+0x0/0xe5
 [<c0106eb1>] kernel_thread_helper+0x5/0xb
Oops: 0000 [#1]
PREEMPT
Modules linked in: agpgart
CPU:    0
EIP:    0061:[<c016c304>]    Not tainted VLI
EFLAGS: 00011286   (2.6.11.12-xen0)
EIP is at clear_inode+0x4c/0xbd
eax: 00000000   ebx: c16eebc4   ecx: 00000000   edx: c16eebc4
esi: c16eecd0   edi: c31fae00   ebp: c0574000   esp: c0575e9c
ds: 007b   es: 007b   ss: 0069
Process kswapd0 (pid: 110, threadinfo=c0574000 task=c0565a00)
Stack: c16eebc4 c16eebc4 c0574000 c016d3f6 c16eebc4 00000000 00000000 c16efd9c
       c16eebc4 0000007b c016a83c c16eebc4 c16eebc4 c0574000 00000000 00000083
       00000000 c109fa00 c016acf9 00000080 c013f4bc 00000080 000000d0 00002e0e
Call Trace:
 [<c016d3f6>] generic_forget_inode+0x14a/0x16e
 [<c016a83c>] prune_dcache+0x1eb/0x226
 [<c016acf9>] shrink_dcache_memory+0x1f/0x45
 [<c013f4bc>] shrink_slab+0x10c/0x16f
 [<c0140aed>] balance_pgdat+0x265/0x3ac
 [<c0140cea>] kswapd+0xb6/0xe5
 [<c012e048>] autoremove_wake_function+0x0/0x4b
 [<c0108be6>] ret_from_fork+0x6/0x1c
 [<c012e048>] autoremove_wake_function+0x0/0x4b
 [<c0140c34>] kswapd+0x0/0xe5
 [<c0106eb1>] kernel_thread_helper+0x5/0xb
Code: 00 00 a8 10 75 02 0f 0b a8 20 74 02 0f 0b 8b 83 0c 01 00 00 8d b3 0c 01
00 00 a8 08 75 38 8b 83 94 00 00 00 85 c0 74 0a 8b 40 24 <8b> 50 3c 85 d2 75
60 8b 83 f4 00 00 00 85 c0 75 4c 8b b3 f8 00
<6>device vif3.0 entered promiscuous mode
I decided to put some printk's into reboot.c's __do_suspend. During a
"good" live migration run I see the printk's show up on the console. In
the bad one I see that __do_suspend never gets called :-(

I'll continue to follow it up the chain to see whether the suspend
message never arrives at all, or whether something goes bad between
getting the message and suspending. I'm running xen-2.0-testing with the
xen-2.0 2.6.11.12-xenU kernel, BTW.

-- Ray
Looks like the suspend message is received in the shutdown handler.
schedule_work is called to schedule the work but, sporadically, that
work is never executed. It is as if schedule_work doesn't really
schedule it, or it is unable to get executed.
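To make the mechanism Ray is describing concrete, here is a rough sketch
of a control-message handler that defers the actual suspend to a
workqueue item, together with the sort of printk tracing mentioned two
messages up. It is an illustration only, not the actual reboot.c:
__do_suspend, shutdown_handler and schedule_work are names taken from
this thread, but the bodies, the handler argument, and the old
three-argument DECLARE_WORK wiring shown here are assumptions about the
2.6.11-era code.

    #include <linux/kernel.h>     /* printk */
    #include <linux/workqueue.h>  /* DECLARE_WORK, schedule_work */

    /* Illustration of the mechanism only -- not the actual reboot.c. */
    static void __do_suspend(void *unused)
    {
        printk(KERN_ALERT "xen: __do_suspend running\n");
        /* ... the real suspend/resume work would happen here ... */
    }

    /* Old-style (pre-2.6.20) workqueue declaration, matching the
     * 2.6.11-era kernels discussed in this thread. */
    static DECLARE_WORK(suspend_work, __do_suspend, NULL);

    /* Called when the suspend control message arrives from xend.
     * It only *queues* the work: if the queued item is never run,
     * __do_suspend never executes, and the tools eventually give up
     * with "Unable to suspend domain". */
    static void shutdown_handler(void *msg /* assumed argument type */)
    {
        printk(KERN_ALERT "xen: suspend request received\n");
        if (schedule_work(&suspend_work) == 0)
            printk(KERN_ALERT "xen: suspend work was already queued\n");
    }

If the "request received" line shows up on the console but the
"__do_suspend running" line never does, that matches the behaviour
described above: the handler ran and schedule_work accepted the item,
yet the deferred work never executed.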
I think I have it fixed, but I'm not sure why :-)

I modified reboot.c's shutdown_handler routine to NOT call
ctrl_if_send_response(). This appears to make live migration rock solid
on my machines. It appears to me that if the xenU kernel attempts to
give a response to the suspend command, it runs the possibility of
locking up.

I have very little knowledge of the Xen code, but it seems to me that if
it works when the response is removed, then either nobody is expecting a
response on the other end of the conversation, or a response is already
being sent from somewhere else. I realize commenting this out means a
response is no longer sent for SYSRQ commands and such, so this is by no
means a proper 'fix', but I think the root cause of the problem I've
been having with live migration periodically failing to suspend has
perhaps been found. I've now performed a live migration about 14 times
without it failing with this change in place.

Is this enough information for someone to figure out what the real cure
should be? I'm starting to think that shutdown_handler should not call
ctrl_if_send_response if it is a suspend request and no previous suspend
request was pending, and otherwise should call ctrl_if_send_response.
But I'd just be guessing.

-- Ray
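For what it's worth, the guard Ray is guessing at in his last paragraph
might look roughly like the following. This is purely hypothetical -- not
a tested patch and not the real handler: the ctrl_msg_t argument, the
CMSG_SHUTDOWN_SUSPEND subtype check and the suspend_pending flag are
assumed names for illustration; only shutdown_handler,
ctrl_if_send_response and the suspend_work item from the earlier sketch
come from this thread.

    /* Hypothetical sketch of the guess above -- not a real or tested patch. */
    static int suspend_pending;   /* assumed flag: a suspend is already queued */

    static void shutdown_handler(ctrl_msg_t *msg)   /* assumed signature */
    {
        if (msg->subtype == CMSG_SHUTDOWN_SUSPEND   /* assumed constant */
            && !suspend_pending) {
            /* First suspend request: queue the work but, per the guess
             * above, do not send the acknowledgement for it, since that
             * response appears to be able to lock things up. */
            suspend_pending = 1;
            schedule_work(&suspend_work);
        } else {
            /* Reboot, halt, sysrq, or a repeated suspend request:
             * acknowledge as the unmodified code does. */
            ctrl_if_send_response(msg);
        }
    }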
Spoke too soon... failed after about the 20th or so migration. But it is
more stable than it was...

-- Ray