Dan,

having taken a fresh snapshot of xen-unstable today, I started seeing
quite a number of these messages during late Xen and early Dom0 boot.
They don't seem to be very meaningful (according to the stack traces I
created for some of them to understand where they originate), hence I
wonder whether they couldn't get silenced.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Dan Magenheimer
2010-Jan-06 17:13 UTC
[Xen-devel] [PATCH] less verbose tmem, was: tmem_relinquish_page: failing order=<n>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@novell.com]
> Subject: tmem_relinquish_page: failing order=<n>
>
> Dan,
>
> having taken a fresh snapshot of xen-unstable today, I started seeing
> quite a number of these messages during late Xen and early Dom0 boot.
> They don't seem to be very meaningful (according to the stack traces I
> created for some of them to understand where they originate), hence I
> wonder whether they couldn't get silenced.

Hi Jan --

The message is relevant if any code calling alloc_heap_pages() for
order>0 isn't able to fall back to order=0. All usages today in Xen can
fall back, but future calls may not, so I'd prefer to keep the printk
there at least in xen-unstable. BUT there's no reason for the message to
be logged in a released Xen, so here's a patch to #ifndef NDEBUG the
printk.

NOTE TO KEIR: Reminder that cset 19939 causes debug=y to be the default
build, so you'll need to turn that off before the final 4.0.0 bits
(and/or turn it back on after xen-unstable forks for 4.0.0).

Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>

diff -r 4feec90815a0 xen/common/tmem.c
--- a/xen/common/tmem.c	Tue Jan 05 08:40:18 2010 +0000
+++ b/xen/common/tmem.c	Wed Jan 06 10:08:01 2010 -0700
@@ -2483,7 +2483,9 @@ EXPORT void *tmem_relinquish_pages(unsig
     relinq_attempts++;
     if ( order > 0 )
     {
+#ifndef NDEBUG
         printk("tmem_relinquish_page: failing order=%d\n", order);
+#endif
         return NULL;
     }
Jan Beulich
2010-Jan-07 15:07 UTC
[Xen-devel] Re: [PATCH] less verbose tmem, was: tmem_relinquish_page: failing order=<n>
>>> Dan Magenheimer <dan.magenheimer@oracle.com> 06.01.10 18:13 >>>
>The message is relevant if any code calling alloc_heap_pages()
>for order>0 isn't able to fall back to order=0. All usages today
>in Xen can fall back, but future calls may not, so I'd prefer
>to keep the printk there at least in xen-unstable. BUT there's

What makes you think so? Iirc there are several xmalloc()-s of more than
a page in size (which ultimately will call alloc_heap_pages()), and
those usually don't have a fallback.

>no reason for the message to be logged in a released Xen
>so here's a patch to ifndef NDEBUG the printk.

Even in a debug build this may be really annoying: on NUMA machines, the
dma_bitsize mechanism (to avoid exhausting DMA memory) can cause close
to 2000 of these messages when allocating Dom0's initial memory (in the
absence of a severely restricting dom0_mem, and with node 0 spanning 4G
or more). This makes booting unacceptably slow, especially when using a
graphical console mode.

Jan
Dan Magenheimer
2010-Jan-07 17:02 UTC
[Xen-devel] RE: [PATCH] less verbose tmem, was: tmem_relinquish_page: failing order=<n>
> From: Jan Beulich [mailto:JBeulich@novell.com]
> Subject: Re: [PATCH] less verbose tmem, was: tmem_relinquish_page:
> failing order=<n>
>
> >>> Dan Magenheimer <dan.magenheimer@oracle.com> 06.01.10 18:13 >>>
> >The message is relevant if any code calling alloc_heap_pages()
> >for order>0 isn't able to fall back to order=0. All usages today
> >in Xen can fall back, but future calls may not, so I'd prefer
> >to keep the printk there at least in xen-unstable. BUT there's
>
> What makes you think so? Iirc there are several xmalloc()-s of more
> than a page in size (which ultimately will call alloc_heap_pages()),
> and those usually don't have a fallback.

I don't know this for a fact, but it was discussed in xen-devel some
time ago (maybe over a year ago). Since tmem doesn't do anything until
at least one tmem-modified guest uses it, the only issue is if launching
a subsequent domain requires an allocation of order>0. Since Xen doesn't
have any kind of memory defragmenter, any such requirement is perilous
at best.

> Even in a debug build this may be really annoying: On NUMA machines,
> the dma_bitsize mechanism (to avoid exhausting DMA memory) can
> cause close to 2000 of these messages when allocating Dom0's
> initial memory (in the absence of a severely restricting dom0_mem,
> and with node 0 spanning 4G or more). This makes booting
> unacceptably slow, especially when using a graphical console mode.

OK, I can take the patch one step further by only doing the printk if
tmem has been used at least once. Let me take a look at that.

Dan