So far we haven't been very good about writing down our freeze
policy.  Here is a proposal with some suggestions and projected dates:

 31 Dec  Feature submission freeze

         Ie, deadline for first version of new feature patches.
         New feature patches posted before this point can be
         committed afterwards if they need the time to get into
         shape.  Anything not posted at all before this point has
         missed the boat.

 All queued patches applied, but no later than 14 Jan
         Feature code freeze

         New feature patches not committed after this point;
         anything intended for 4.1 not in a good final form by this
         point has missed the boat.

         RC1 at this point, unless serious or blocking bugs are
         present.

 31 Jan  Slushy code freeze

         No further patches unless they either
          - fix an important bug; or
          - are very trivial and low-risk.

         All patches to receive an ack from a maintainer other than
         the committer.

 14 Feb  Hard code freeze

         No further patches other than for release-critical bugs.
         All patches to receive an ack from a maintainer other than
         the committer.

 28 Feb  Release.

In all cases, the plan is subject to modification, and to the making
of freeze exceptions by the maintainers following consultation.

Ian.
After consultation with Keir, here's a revised proposal:

 Feature submission freeze
    after 31 Dec

    New feature patches posted before this point can be committed
    afterwards if they needed the time to get into shape.

    New features not previously posted have missed the boat.

    Bugfixes are allowed.

 Feature code freeze
    after all queued patches applied, but at the latest 14 Jan

    RC1 at this point, unless serious or blocking bugs are present.

    Bugfixes are allowed provided they are not high-risk.

    No new features will be committed.

 Slushy code freeze
    after 28 Jan

    Bugfixes are allowed but only if they either:
     - fix an important bug; or
     - are very trivial and low-risk.

    All changes to receive a formal ack from a maintainer other than
    the committer.

 Hard code freeze
    after 7 Feb

    No further patches other than for release-critical bugs.

    All changes to receive a formal ack from a maintainer other than
    the committer.

 Release
    planned for 14 Feb

In all cases, the plan is subject to modification, and to the making
of freeze exceptions by the maintainers following consultation.

Ian.
I wrote:
> Feature code freeze
>    after all queued patches applied

All the outstanding patches from before the feature submission freeze
are now in xen-unstable.  We hope to tag Xen 4.1.0 RC1 tomorrow,
assuming we get a pass and push from the automatic tests.

From this point on:

> Bugfixes are allowed provided they are not high-risk.
>
> No new features will be committed.

New features, and high-risk or intrusive bugfixes, will be deferred
to the 4.2 release cycle.

Ian.
>>> On 11.01.11 at 20:43, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> I wrote:
>> Feature code freeze
>>    after all queued patches applied
>
> All the outstanding patches from before the feature submission freeze
> are now in xen-unstable.  We hope to tag Xen 4.1.0 RC1 tomorrow,
> assuming we get a pass and push from the automatic tests.

That's the main thing - getting currently queued things pushed out
of staging.  But the latest test results look anything but promising.

I actually have one larger cleanup patch (eliminating over 300 lines
of de facto dead code) pending which I had hoped to get into 4.1, but
which I wouldn't want to submit without having applied/built/run on
top of up-to-date non-staging bits.

Jan
>>> On 11.01.11 at 19:28, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> After consultation with Keir, here's a revised proposal:
>
> Feature submission freeze
>    after 31 Dec
>
> New feature patches posted before this point can be committed
> afterwards if they needed the time to get into shape.
>
> New features not previously posted have missed the boat.

Based on 4.0 experience, I'm afraid we'll have to default-disable
tmem again for 4.1, since (to my knowledge) there hasn't been much
(if any) work to eliminate non-order-0 post-boot allocations.

Jan
On 12/01/2011 08:11, "Jan Beulich" <JBeulich@novell.com> wrote:

>> All the outstanding patches from before the feature submission freeze
>> are now in xen-unstable.  We hope to tag Xen 4.1.0 RC1 tomorrow,
>> assuming we get a pass and push from the automatic tests.
>
> That's the main thing - getting currently queued things pushed out
> of staging.  But the latest test results look anything but promising.

My suspicion is that this is due to a dom0 kernel issue.  Not sure how
sensible it is to try to stabilise Xen bits against a moving kernel.
Does pv_ops have an actually stable 'stable' branch yet?

> I actually have one larger cleanup patch (eliminating over 300
> lines of de facto dead code) pending which I had hoped to get into
> 4.1, but which I wouldn't want to submit without having applied/
> built/run on top of up-to-date non-staging bits.

If it doesn't fix bugs or implement something new we really care
about a lot, it can wait for 4.2 development to start, I'm sure.
That said, I expect you can in fact test your patch against a
known-good dom0 kernel and latest xen-unstable bits.

 -- Keir
Keir Fraser writes ("Re: [Xen-devel] Re: Freeze schedule"):> My suspicion is that this is due to a dom0 kernel issue. Not sure how > sensible it is to try to stabilise Xen bits against a moving kernel. Does > pv_ops have an actually stable ''stable'' branch yet?We are using the pvops nominally-stable branch and furthermore, the automated tests have a separate push gate for the dom0 kernel. And, the same boot failure happens with the XCP kernel too. So I''m pretty sure it''s Xen problem. See a stack trace from one of the logs, below. Ian. From, eg, http://www.chiark.greenend.org.uk/~xensrcts/logs/4899/test-amd64-i386-xl/info.html for which the corresponding build (including symbols files) is at: http://www.chiark.greenend.org.uk/~xensrcts/logs/4899/build-amd64/info.html specifically http://www.chiark.greenend.org.uk/~xensrcts/logs/4899/build-amd64/build/xendist.tar.gz Jan 12 03:06:42 (XEN) Early fatal page fault at e008:ffff82c480114c9f (cr2=ffff82c400000404, ecJan 12 03:06:42 0002) Jan 12 03:06:42 (XEN) Stack dump: 0000000000000000 ffff82f600000000 ffff82f600002020 00000000000 Jan 12 03:06:42 00000 ffff82c480297d98 0000000000000101 0180000000000000 0100000000000000 ffff82 Jan 12 03:06:42 c400000404 ffff828000000808 0000000000000000 0000000000000000 ffffffffffffffff f Jan 12 03:06:42 fff82f600002020 0000000000000000 0000000000000002 ffff82c480114c9f 000000000000e Jan 12 03:06:42 008 0000000000010093 ffff82c480297d48 0000000000000000 ffff82c480114bd0 ffff82c4 Jan 12 03:06:42 802cb580 0000000100000001 0000000000003000 0000000000000008 0000000000000008 000 Jan 12 03:06:42 0000000000000 0000000000101000 ffff82c4802abd00 ffff82f600002020 0000000000000ef Jan 12 03:06:42 f ffff82c480297de8 ffff82c480116a6a 0000000000000040 0000000000000000 0000000000 Jan 12 03:06:42 000fff ffff830000100000 0000000000000000 0000000000000000 ffff83021b74e000 fffff Jan 12 03:06:42 fffffffffff ffff82c480297e28 ffff82c480260e79 0000000080297e28 00000000001ffe2a Jan 12 03:06:42 0000000000000002 000004ffffffffff ffff830000000000 000000000000000d ffff82c48029 Jan 12 03:06:42 7f08 ffff82c48027ae6a 00000000002d4d80 0000000000000000 000000000007bf30 0000000 Jan 12 03:06:42 000000000 0000000000000000 ffff83000007bd70 ffff83000007bfb0 ffff83000007bf30 00 Jan 12 03:06:42 00000000868000 0100000000000000 00000000dfa00000 0000000000000000 00000000000000 Jan 12 03:06:42 00 ffff82c48028b168 ffffffff00000000 0000000001000000 0000000800000000 000000010 Jan 12 03:06:42 000006e 0000000000000003 00000000000002f8 0000000000000000 0000000000000000 0000 Jan 12 03:06:42 000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000067e9c Jan 12 03:06:42 ffff82c4801000b5 0000000000000000 0000000000000000 0000000000000000 00000000000 Jan 12 03:06:42 00000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000 Jan 12 03:06:42 0000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0 Jan 12 03:06:42 000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000 Jan 12 03:06:42 000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 00000000 Jan 12 03:06:42 00000000 0000000000000000 0000000000000000 0000000000000000 00000000fffff000 000 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
On 12/01/2011 11:29, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:

> Keir Fraser writes ("Re: [Xen-devel] Re: Freeze schedule"):
>> My suspicion is that this is due to a dom0 kernel issue.  Not sure how
>> sensible it is to try to stabilise Xen bits against a moving kernel.
>> Does pv_ops have an actually stable 'stable' branch yet?
>
> We are using the pvops nominally-stable branch and furthermore, the
> automated tests have a separate push gate for the dom0 kernel.
>
> And, the same boot failure happens with the XCP kernel too.  So I'm
> pretty sure it's a Xen problem.
>
> See a stack trace from one of the logs, below.

Yep, that one's fixed now.  Not sure why I didn't find that in the
logs.  I found what looked like a hang during dom0 kernel boot, but I
must have been looking at the wrong thing.

 -- Keir
Dan Magenheimer
2011-Jan-12 18:32 UTC
RE: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
> >>> On 11.01.11 at 19:28, Ian Jackson <Ian.Jackson@eu.citrix.com>
> wrote:
> > After consultation with Keir, here's a revised proposal:
> >
> > Feature submission freeze
> >    after 31 Dec
> >
> > New feature patches posted before this point can be committed
> > afterwards if they needed the time to get into shape.
> >
> > New features not previously posted have missed the boat.
>
> Based on 4.0 experience, I'm afraid we'll have to default-disable
> tmem again for 4.1, since (to my knowledge) there hasn't been
> much (if any) work to eliminate non-order-0 post-boot allocations.

I haven't tested it due to other commitments, but didn't someone
(Tim?) submit a patch to change shadow tables to use order-0, and
Keir submit a patch to change the domain struct to order-0?

IIRC, that's not everything... I think passthrough still uses
order>0... but I assumed the vast majority of the problem was solved.
If there's evidence to the contrary, I am OK with default-disable.

Until I manage to fight off all the anti-Xen alligators and get the
relatively small tmem changes upstream (c.f. cleancache and frontswap
and kztmem in lkml), the number of Xen tmem users will remain
smallish.

Dan
>>> On 12.01.11 at 19:32, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
>> Based on 4.0 experience, I'm afraid we'll have to default-disable
>> tmem again for 4.1, since (to my knowledge) there hasn't been
>> much (if any) work to eliminate non-order-0 post-boot allocations.
>
> I haven't tested it due to other commitments, but didn't someone
> (Tim?) submit a patch to change shadow tables to use order-0, and
> Keir submit a patch to change the domain struct to order-0?

alloc_{domain,vcpu}_struct() use order 1, and since both structures
contain one or more instances of cpumask_t, their size is
configuration-dependent.

> IIRC, that's not everything... I think passthrough still uses
> order>0... but I assumed the vast majority of the problem was solved.

Yes, pass-through is one violator; domain_create() is another, with
all but one of its offending allocations being d->nr_pirqs-sized
arrays (the one other case is even worse, allocating a nr_irqs-sized
array of struct timer).  Only the shadow mode case was addressed,
iirc.

Jan
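To make the order terminology above concrete: an order-n allocation
asks the heap for 2^n physically contiguous pages, so once tmem (or
heavy ballooning) has fragmented the heap into scattered single
pages, order-0 requests keep succeeding while order-1 requests like
the ones Jan lists can start to fail.  Below is a minimal C sketch of
the fragile versus robust allocation patterns; it is illustrative
only, with aligned_alloc() standing in for a real page allocator, and
none of these names are actual Xen APIs.

/* Illustrative sketch, not Xen code.  In a kernel heap, the order-1
 * path is the one that fails first once memory is fragmented. */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

/* Stand-in for a buddy allocator: returns 2^order contiguous pages. */
static void *alloc_pages(unsigned int order)
{
    return aligned_alloc(PAGE_SIZE, PAGE_SIZE << order);
}

/* Fragile pattern: one order-1 (two contiguous pages) allocation. */
static void *alloc_contiguous_pair(void)
{
    return alloc_pages(1);
}

/* Robust pattern: two independent order-0 pages linked by pointers,
 * so only single free pages are ever required. */
struct page_pair { void *lo, *hi; };

static int alloc_split_pair(struct page_pair *p)
{
    p->lo = alloc_pages(0);
    p->hi = alloc_pages(0);
    if (!p->lo || !p->hi) {
        free(p->lo);               /* free(NULL) is a no-op */
        free(p->hi);
        p->lo = p->hi = NULL;
        return -1;
    }
    return 0;
}

int main(void)
{
    struct page_pair p = { NULL, NULL };
    void *blk = alloc_contiguous_pair();
    printf("order-1 block at %p\n", blk);
    if (alloc_split_pair(&p) == 0)
        printf("order-0 pages at %p and %p\n", p.lo, p.hi);
    free(blk);
    free(p.lo);
    free(p.hi);
    return 0;
}

Converting code of the first shape into the second is what
"eliminating non-order-0 allocations" means in this thread; the cost
is an extra pointer indirection wherever the structure is accessed.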
Tim Deegan
2011-Jan-19 16:53 UTC
[PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
# HG changeset patch
# User Tim Deegan <Tim.Deegan@citrix.com>
# Date 1295455677 0
# Node ID 497a764d9314a4061440938815f67ef737afc8f0
# Parent  fe8a177ae9cb01cda771ba64cea2c0470cf79cd8
Disable tmem by default for 4.1 release.

Although one major source of order>0 allocations has been removed,
others still remain, so re-disable tmem until the issue can be fixed
properly.

Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>

diff -r fe8a177ae9cb -r 497a764d9314 xen/common/tmem_xen.c
--- a/xen/common/tmem_xen.c	Wed Jan 19 15:29:04 2011 +0000
+++ b/xen/common/tmem_xen.c	Wed Jan 19 16:47:57 2011 +0000
@@ -15,7 +15,7 @@
 
 #define EXPORT /* indicates code other modules are dependent upon */
 
-EXPORT bool_t __read_mostly opt_tmem = 1;
+EXPORT bool_t __read_mostly opt_tmem = 0;
 boolean_param("tmem", opt_tmem);
 
 EXPORT bool_t __read_mostly opt_tmem_compress = 0;
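Note that the patch only flips the compiled-in default; the
boolean_param("tmem", opt_tmem) hook is untouched, so tmem can still
be switched back on from the hypervisor command line.  A sketch of a
GRUB (legacy) entry doing so -- the paths and kernel name here are
placeholders, and the tmem=1 spelling assumes the usual Xen
boolean-parameter syntax:

title Xen 4.1 with tmem re-enabled
    kernel /boot/xen.gz tmem=1
    module /boot/vmlinuz-2.6-xen root=/dev/sda1 ro
    module /boot/initrd-2.6-xen.img

As discussed later in the thread, a guest must additionally enable
tmem on its own kernel command line before any tmem operations
actually happen.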
Dan Magenheimer
2011-Jan-19 21:38 UTC
RE: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
> Subject: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
>
> Disable tmem by default for 4.1 release.
>
> Although one major source of order>0 allocations has been removed,
> others still remain, so re-disable tmem until the issue can be fixed
> properly.
>
> Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
>
> diff -r fe8a177ae9cb -r 497a764d9314 xen/common/tmem_xen.c
> --- a/xen/common/tmem_xen.c	Wed Jan 19 15:29:04 2011 +0000
> +++ b/xen/common/tmem_xen.c	Wed Jan 19 16:47:57 2011 +0000
> @@ -15,7 +15,7 @@
>
>  #define EXPORT /* indicates code other modules are dependent upon */
>
> -EXPORT bool_t __read_mostly opt_tmem = 1;
> +EXPORT bool_t __read_mostly opt_tmem = 0;
>  boolean_param("tmem", opt_tmem);

Just to check again, has anyone actually seen a problem with tmem
enabled by default recently?  I agree that there is still
theoretically a problem, but there is the same problem with normal
guests doing lots of ballooning as well.  Also, note that even if
tmem defaults to enabled, the problem is impossible unless a guest
enables tmem (or, in the case of SuSE, dom0).  And even if a guest
does enable tmem, the problem manifested largely because shadow pages
used order>0 allocations (now fixed?)... failure on domain creation
can happen for many reasons and is much less of an issue, true?

Feel free to shoot me down with more evidence, but I have to at least
provide token resistance to this patch.  Distros might certainly
choose to disable it to avoid any risk at all, but turning it off
seems overkill for xen.org open source Xen, IMHO.

Dan
Sander Eikelenboom
2011-Jan-19 21:51 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
Except that xm dmesg gets pretty filled with messages (which could be
due to loglvl=all):

(XEN) [2011-01-16 12:48:33] tmem: all pools frozen for all domains
(XEN) [2011-01-16 12:48:33] tmem: all pools thawed for all domains
(XEN) [2011-01-16 12:48:44] tmem: all pools frozen for all domains
(XEN) [2011-01-16 12:48:44] tmem: all pools thawed for all domains
etc. etc. etc.

I haven't seen any problems so far (without enabling it in guests and
testing the actual functionality).

--
Sander

Wednesday, January 19, 2011, 10:38:09 PM, you wrote:

[...]

--
Best regards,
 Sander
 mailto:linux@eikelenboom.it
Keir Fraser
2011-Jan-20 07:17 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
On 19/01/2011 21:38, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> Just to check again, has anyone actually seen a problem with tmem
> enabled by default recently?  I agree that there is still
> theoretically a problem, but there is the same problem with normal
> guests doing lots of ballooning as well.  Also, note that even if
> tmem defaults to enabled, the problem is impossible unless a guest
> enables tmem (or, in the case of SuSE, dom0).  And even if a guest
> does enable tmem, the problem manifested largely because shadow
> pages used order>0 allocations (now fixed?)... failure on domain
> creation can happen for many reasons and is much less of an issue,
> true?
>
> Feel free to shoot me down with more evidence, but I have to at
> least provide token resistance to this patch.  Distros might
> certainly choose to disable it to avoid any risk at all, but
> turning it off seems overkill for xen.org open source Xen, IMHO.

Tbh I was wondering whether anyone is really using it in earnest.  No
upstream kernels support it?  If no one's using it, who really cares
whether it's enabled or not, apart from its author.

 -- Keir
Olivier B.
2011-Jan-20 08:07 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
On 20/01/2011 08:17, Keir Fraser wrote:
> On 19/01/2011 21:38, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:
> [...]
>
> Tbh I was wondering whether anyone is really using it in earnest.  No
> upstream kernels support it?  If no one's using it, who really cares
> whether it's enabled or not, apart from its author.

Well, it is present but disabled in the new Debian Squeeze kernel...
and as there is no documentation, there is no chance of it being
enabled.  I had to search the Xen lists to find out how to enable it.
For example:

http://www.google.fr/search?hl=fr&source=hp&q=tmem_compress

We should maybe add some info about tmem to the Xen Wiki, no?

Olivier
Ian Campbell
2011-Jan-20 08:23 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
On Thu, 2011-01-20 at 08:07 +0000, Olivier B. wrote:
> Well, it is present but disabled in the new Debian Squeeze kernel...

Do you really mean the kernel (i.e. Linux), or do you really mean the
hypervisor?

I was not aware of any tmem support (disabled or otherwise) in the
Debian Squeeze Linux _kernel_.  It hasn't been deliberately patched
in, and it does not appear to be in the xen.git snapshot used in the
Xen dom0 flavour of the kernel.

You are right that it is present and disabled in the Squeeze
hypervisor -- this is simply the default in the Xen 4.0.1 release and
hasn't been tweaked by Debian.

Ian.
Olivier B.
2011-Jan-20 09:13 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
On 20/01/2011 09:23, Ian Campbell wrote:
> On Thu, 2011-01-20 at 08:07 +0000, Olivier B. wrote:
>> Well, it is present but disabled in the new Debian Squeeze kernel...
>
> Do you really mean the kernel (i.e. Linux), or do you really mean the
> hypervisor?
> [...]
> You are right that it is present and disabled in the Squeeze
> hypervisor -- this is simply the default in the Xen 4.0.1 release and
> hasn't been tweaked by Debian.

I don't know; on a Squeeze Dom0 with the "tmem" parameter on the
default hypervisor and kernel, I have this:

root@yunze:~# xm tmem-list -a
G=Tt:39,Te:0,Cf:0,Af:0,Pf:0,Ta:0,Lm:0,Et:0,Ea:0,Rt:0,Ra:0,Rx:0,Fp:0
T=Gn:0,Gt:0,Gx:0,Gm:2147483647,Pn:0,Pt:0,Px:0,Pm:2147483647,gn:0,gt:0,gx:0,gm:2147483647,pn:0,pt:0,px:0,pm:2147483647,Fn:0,Ft:0,Fx:0,Fm:2147483647,On:0,Ot:0,Ox:0,Om:2147483647,Cn:0,Ct:0,Cx:0,Cm:2147483647,cn:0,ct:0,cx:0,cm:2147483647,dn:0,dt:0,dx:0,dm:2147483647

Is that sufficient to say that it works?  But:

root@yunze:~# xm list 9
Name                 ID  Mem VCPUs State  Time(s)
rome                  9  512     2 -b----    11.8
root@yunze:~# xm tmem-list 9
-1

and from the VM:

root@rome:~# cat /proc/cmdline
root=/dev/mapper/vg--rome-root ro rootflags=data=writeback rootdelay=1 panic=60 swiotlb=force iommu=soft tmem
root@rome:~# uname -a
Linux rome 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux

But as I was saying, it is difficult to find any documentation, so I
really don't know what is wrong or not...

Olivier
Tim Deegan
2011-Jan-20 09:31 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
At 21:38 +0000 on 19 Jan (1295473089), Dan Magenheimer wrote:
> Just to check again, has anyone actually seen a problem with
> tmem enabled by default recently?  I agree that there is still
> theoretically a problem, but there is the same problem with
> normal guests doing lots of ballooning as well.  Also, note
> that even if tmem defaults to enabled, the problem is impossible
> unless a guest enables tmem (or, in the case of SuSE, dom0).

I thought we already had this discussion and you agreed to disable
tmem.  I only posted a patch because IanJ pointed out that we hadn't
actually done it.

> And even if a guest does enable tmem, the problem manifested
> largely because shadow pages used order>0 allocations (now fixed?)...

Yes, that's now fixed.

Tim.

--
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
Jan Beulich
2011-Jan-20 10:04 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
>>> On 20.01.11 at 08:17, Keir Fraser <keir@xen.org> wrote:
> On 19/01/2011 21:38, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:
> [...]
>
> Tbh I was wondering whether anyone is really using it in earnest.  No
> upstream kernels support it?  If no one's using it, who really cares
> whether it's enabled or not, apart from its author.

As Dan wrote, all our kernels 2.6.31 and newer use it if the
hypervisor has it enabled, which is also how we noticed the problems
that its being enabled by default caused during the 4.0 release
cycle.

Since the 2.6.32 XCP kernel is derived from ours, and nothing in the
patch queue there removes the tmem bits afaict, it ought to be
affected just as much.

Jan
Ian Campbell
2011-Jan-20 10:18 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
On Thu, 2011-01-20 at 10:04 +0000, Jan Beulich wrote:
> Since the 2.6.32 XCP kernel is derived from ours, and nothing in the
> patch queue there removes the tmem bits afaict, it ought to be
> affected just as much.

Neither CONFIG_PRECACHE nor CONFIG_PRESWAP is enabled in the XCP
kernel configuration, and hence CONFIG_TMEM is not selected.

Ian.
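For concreteness, enabling the guest-side tmem hooks in a kernel
built from such a tree would mean something like the following
.config fragment.  This is a hypothetical sketch based only on the
option names mentioned above; the exact dependency relationship
between the three options is an assumption.

# hypothetical .config fragment -- option names from this thread
CONFIG_PRECACHE=y
CONFIG_PRESWAP=y
# per Ian's description, CONFIG_TMEM is then selected automatically
CONFIG_TMEM=y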
Dan Magenheimer
2011-Jan-20 22:49 UTC
RE: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
> At 21:38 +0000 on 19 Jan (1295473089), Dan Magenheimer wrote:
> > Just to check again, has anyone actually seen a problem with
> > tmem enabled by default recently?  I agree that there is still
> > theoretically a problem, but there is the same problem with
> > normal guests doing lots of ballooning as well.  Also, note
> > that even if tmem defaults to enabled, the problem is impossible
> > unless a guest enables tmem (or, in the case of SuSE, dom0).
>
> I thought we already had this discussion and you agreed to disable
> tmem.  I only posted a patch because IanJ pointed out that we hadn't
> actually done it.

Sorry, I guess I had second thoughts/reservations even just after my
previous reply, but didn't get around to replying to my own reply
until I saw your reply.  (Yes, I'm still trying to parse that too ;-)

Thanks,
Dan
Ian Jackson
2011-Jan-21 17:32 UTC
Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)
Keir Fraser writes ("Re: [PATCH] Re: tmem on 4.1 (was [Xen-devel] Re: Freeze schedule)"):> Tbh I was wondering whether anyone is really using it in earnest. No > upstream kernels support it? If noone''s using it, who really cares whether > it''s enabled or not, apart from its author.If using it can break order>0 allocations, some of which we still have, then perhaps we don''t want it enabled because it''s an opportunity for DoS by a guest ? Ian. _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel