Three more minor corrections to trace.c. The formula used the already
calculated offset of the array as a factor in the multiplication, and the
max number of pages per cpu could wrap. However, it's not possible to
allocate a MAX_ORDER buffer (order 9 on x86) for each cpu (just for
verification) on a box with 8 cpus. If booted with tbuf_size=8192, the
allocation fails on the 4th cpu; at runtime it already fails on the
second. What is the reason for that? Which pool of pages is too small?

Olaf
Olaf Hering
2011-Mar-30 18:04 UTC
[Xen-devel] [PATCH 1 of 3] xentrace: correct formula to calculate t_info_pages
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301423840 -7200
# Node ID 8a2ce5e49b2c5f2e013734b5d53eae37572f4101
# Parent  45eeeb6d0481efaab2a59941e1b8e061aead37d4
xentrace: correct formula to calculate t_info_pages

The current formula to calculate t_info_pages, based on the initial
code, is slightly incorrect. It may allocate more than needed.
Each cpu has some pages/mfns stored as uint32_t.
That list is stored with an offset at tinfo.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

---
 xen/common/trace.c |  7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff -r 45eeeb6d0481 -r 8a2ce5e49b2c xen/common/trace.c
--- a/xen/common/trace.c	Tue Mar 29 16:34:01 2011 +0100
+++ b/xen/common/trace.c	Tue Mar 29 20:37:20 2011 +0200
@@ -110,7 +110,7 @@
 {
     struct t_buf dummy;
     typeof(dummy.prod) size;
-    unsigned int t_info_words, t_info_bytes;
+    unsigned int t_info_words;
 
     /* force maximum value for an unsigned type */
     size = -1;
@@ -125,9 +125,8 @@
         pages = size;
     }
 
-    t_info_words = num_online_cpus() * pages + t_info_first_offset;
-    t_info_bytes = t_info_words * sizeof(uint32_t);
-    t_info_pages = PFN_UP(t_info_bytes);
+    t_info_words = num_online_cpus() * pages * sizeof(uint32_t);
+    t_info_pages = PFN_UP(t_info_first_offset + t_info_words);
     printk(XENLOG_INFO "xentrace: requesting %u t_info pages "
            "for %u trace pages on %u cpus\n",
            t_info_pages, pages, num_online_cpus());
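To see why the old formula can over-allocate, here is a worked example. All
numbers are hypothetical, and it assumes t_info_first_offset is a byte
offset, as the new code treats it:

    /* Hypothetical numbers: 8 cpus, 127 trace pages per cpu,
     * t_info_first_offset = 16 bytes, 4096-byte pages.
     *
     * Old formula (offset wrongly included in the multiplication):
     *   t_info_words = 8 * 127 + 16 = 1032
     *   t_info_bytes = 1032 * 4     = 4128 -> PFN_UP(4128) = 2 pages
     *
     * New formula (offset added once, in bytes):
     *   t_info_words = 8 * 127 * 4  = 4064
     *   PFN_UP(16 + 4064) = PFN_UP(4080)   = 1 page
     */

Scaling the offset by sizeof(uint32_t) only costs extra pages when the total
lands near a page boundary, which is why the old formula was merely
"slightly" incorrect.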
Olaf Hering
2011-Mar-30 18:04 UTC
[Xen-devel] [PATCH 2 of 3] xentrace: use tbuf_size for overflow check
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301424075 -7200
# Node ID 3e95e737bc51c2295926e4f23389b1cb161d6d7b
# Parent  8a2ce5e49b2c5f2e013734b5d53eae37572f4101
xentrace: use tbuf_size for overflow check

The calculated number of per-cpu trace pages is stored in t_info and
shared with tools like xentrace. Since it is a u16, the value may
overflow because the current check is based on u32.

Using the u16 means each cpu could in theory use up to 256MB as trace
buffer. However, such a large allocation will currently fail on x86 due
to the MAX_ORDER limit.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 8a2ce5e49b2c -r 3e95e737bc51 xen/common/trace.c
--- a/xen/common/trace.c	Tue Mar 29 20:37:20 2011 +0200
+++ b/xen/common/trace.c	Tue Mar 29 20:41:15 2011 +0200
@@ -104,25 +104,26 @@
  * calculate_tbuf_size - check to make sure that the proposed size will fit
  * in the currently sized struct t_info and allows prod and cons to
  * reach double the value without overflow.
+ * tbuf_size is u16, while t_buf prod/cons are u32, so using tbuf_size for
+ * the overflow check is sufficient.
+ * The t_info layout is fixed and can't be changed without breaking xentrace.
  * Initialize t_info_pages based on number of trace pages.
  */
 static int calculate_tbuf_size(unsigned int pages)
 {
-    struct t_buf dummy;
-    typeof(dummy.prod) size;
+    struct t_info dummy;
+    typeof(dummy.tbuf_size) max_pages;
     unsigned int t_info_words;
 
     /* force maximum value for an unsigned type */
-    size = -1;
+    max_pages = -1;
 
-    /* max size holds up to n pages */
-    size /= PAGE_SIZE;
-    if ( pages > size )
+    if ( pages > max_pages )
     {
         printk(XENLOG_INFO "xentrace: requested number of %u pages "
                "reduced to %u\n",
-               pages, (unsigned int)size);
-        pages = size;
+               pages, max_pages);
+        pages = max_pages;
     }
 
     t_info_words = num_online_cpus() * pages * sizeof(uint32_t);
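Two details of the patch are worth spelling out: the "size = -1" /
"max_pages = -1" idiom relies on unsigned wrap-around to produce the type's
maximum value, and the 256MB figure in the commit message follows directly
from the u16 limit. A sketch, assuming 4096-byte pages:

    /* Assigning -1 to an unsigned type wraps to its maximum value, so the
     * check automatically adapts if the field's type is ever changed. */
    uint16_t max_pages = -1;   /* 0xffff = 65535 pages per cpu */
    uint32_t max_prod  = -1;   /* 0xffffffff, basis of the old u32 check */

    /* The "up to 256MB per cpu" claim:
     *   65535 pages/cpu * 4096 bytes/page = 268431360 bytes ~= 256MB */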
Olaf Hering
2011-Mar-30 18:04 UTC
[Xen-devel] [PATCH 3 of 3] xentrace: remove unneeded debug printk
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1301424082 -7200
# Node ID eea91e2e4b452cb573982f4b47ecfa4c656e9547
# Parent  3e95e737bc51c2295926e4f23389b1cb161d6d7b
xentrace: remove unneeded debug printk

The pointer value in case of an allocation failure is rather uninteresting.

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 3e95e737bc51 -r eea91e2e4b45 xen/common/trace.c
--- a/xen/common/trace.c	Tue Mar 29 20:41:15 2011 +0200
+++ b/xen/common/trace.c	Tue Mar 29 20:41:22 2011 +0200
@@ -239,7 +239,6 @@
     {
         void *rawbuf = per_cpu(t_bufs, cpu);
         per_cpu(t_bufs, cpu) = NULL;
-        printk(XENLOG_DEBUG "xentrace: cpu %d p %p\n", cpu, rawbuf);
         if ( rawbuf )
         {
             ASSERT(!(virt_to_page(rawbuf)->count_info & PGC_allocated));
George Dunlap
2011-Apr-01 10:32 UTC
[Xen-devel] Re: [PATCH 1 of 3] xentrace: correct formula to calculate t_info_pages
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

On Wed, 2011-03-30 at 19:04 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1301423840 -7200
> # Node ID 8a2ce5e49b2c5f2e013734b5d53eae37572f4101
> # Parent  45eeeb6d0481efaab2a59941e1b8e061aead37d4
> xentrace: correct formula to calculate t_info_pages
>
> The current formula to calculate t_info_pages, based on the initial
> code, is slightly incorrect. It may allocate more than needed.
> Each cpu has some pages/mfns stored as uint32_t.
> That list is stored with an offset at tinfo.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> [...]
George Dunlap
2011-Apr-01 11:18 UTC
[Xen-devel] Re: [PATCH 2 of 3] xentrace: use tbuf_size for overflow check
On Wed, 2011-03-30 at 19:04 +0100, Olaf Hering wrote:
> xentrace: use tbuf_size for overflow check
>
> The calculated number of per-cpu trace pages is stored in t_info and
> shared with tools like xentrace. Since it is a u16, the value may
> overflow because the current check is based on u32.

Hmm -- while this is true, it's possible this may change in the future.
If we ever changed t_info.tbuf_size to be u32, then t_buf.prod/cons
would again be the limiting factor.

Should we perhaps add both checks?

> Using the u16 means each cpu could in theory use up to 256MB as trace
> buffer. However, such a large allocation will currently fail on x86 due
> to the MAX_ORDER limit.

FWIW, I don't believe that there's any reason the allocations have to be
contiguous any more. I kept them contiguous to minimize the changes to
the moving parts near a release. But the new system has been pretty
well tested now, so I think looking at non-contiguous allocations may be
worthwhile.

> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> [...]
George Dunlap
2011-Apr-01 11:18 UTC
[Xen-devel] Re: [PATCH 3 of 3] xentrace: remove unneeded debug printk
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

On Wed, 2011-03-30 at 19:04 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@aepfle.de>
> # Date 1301424082 -7200
> # Node ID eea91e2e4b452cb573982f4b47ecfa4c656e9547
> # Parent  3e95e737bc51c2295926e4f23389b1cb161d6d7b
> xentrace: remove unneeded debug printk
>
> The pointer value in case of an allocation failure is rather uninteresting.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> [...]
Olaf Hering
2011-Apr-05 10:19 UTC
[Xen-devel] Re: [PATCH 2 of 3] xentrace: use tbuf_size for overflow check
On Fri, Apr 01, George Dunlap wrote:
> On Wed, 2011-03-30 at 19:04 +0100, Olaf Hering wrote:
> > xentrace: use tbuf_size for overflow check
> >
> > The calculated number of per-cpu trace pages is stored in t_info and
> > shared with tools like xentrace. Since it is a u16, the value may
> > overflow because the current check is based on u32.
>
> Hmm -- while this is true, it's possible this may change in the future.
> If we ever changed t_info.tbuf_size to be u32, then t_buf.prod/cons
> would again be the limiting factor.
>
> Should we perhaps add both checks?

I will update the patch to check for both.

> > Using the u16 means each cpu could in theory use up to 256MB as trace
> > buffer. However, such a large allocation will currently fail on x86 due
> > to the MAX_ORDER limit.
>
> FWIW, I don't believe that there's any reason the allocations have to be
> contiguous any more. I kept them contiguous to minimize the changes to
> the moving parts near a release. But the new system has been pretty
> well tested now, so I think looking at non-contiguous allocations may be
> worthwhile.

This will be a bigger change, I think. Added to my todo list.

Olaf
Olaf Hering
2011-Apr-07 13:50 UTC
[Xen-devel] Re: [PATCH 2 of 3] xentrace: use tbuf_size for overflow check
On Fri, Apr 01, George Dunlap wrote:
> On Wed, 2011-03-30 at 19:04 +0100, Olaf Hering wrote:
> > xentrace: use tbuf_size for overflow check
> >
> > The calculated number of per-cpu trace pages is stored in t_info and
> > shared with tools like xentrace. Since it is a u16, the value may
> > overflow because the current check is based on u32.
>
> Hmm -- while this is true, it's possible this may change in the future.
> If we ever changed t_info.tbuf_size to be u32, then t_buf.prod/cons
> would again be the limiting factor.
>
> Should we perhaps add both checks?

I will test the change below.

diff -r 2b66b83b19b6 xen/common/trace.c
--- a/xen/common/trace.c	Thu Apr 07 12:13:58 2011 +0100
+++ b/xen/common/trace.c	Thu Apr 07 15:45:21 2011 +0200
@@ -104,25 +104,33 @@
  * calculate_tbuf_size - check to make sure that the proposed size will fit
  * in the currently sized struct t_info and allows prod and cons to
  * reach double the value without overflow.
+ * The t_info layout is fixed and can't be changed without breaking xentrace.
  * Initialize t_info_pages based on number of trace pages.
  */
 static int calculate_tbuf_size(unsigned int pages)
 {
-    struct t_buf dummy;
-    typeof(dummy.prod) size;
+    struct t_buf dummy_size;
+    typeof(dummy_size.prod) max_size;
+    struct t_info dummy_pages;
+    typeof(dummy_pages.tbuf_size) max_pages;
     unsigned int t_info_words;
 
     /* force maximum value for an unsigned type */
-    size = -1;
+    max_size = -1;
+    max_pages = -1;
 
     /* max size holds up to n pages */
-    size /= PAGE_SIZE;
-    if ( pages > size )
+    max_size /= PAGE_SIZE;
+
+    if ( max_size < max_pages )
+        max_pages = max_size;
+
+    if ( pages > max_pages )
     {
         printk(XENLOG_INFO "xentrace: requested number of %u pages "
                "reduced to %u\n",
-               pages, (unsigned int)size);
-        pages = size;
+               pages, max_pages);
+        pages = max_pages;
     }
 
     t_info_words = num_online_cpus() * pages * sizeof(uint32_t);
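With 4096-byte pages, the two limits in the combined check compare as
follows (illustrative arithmetic, not part of the patch):

    /* max_size  = 0xffffffff / 4096 = 1048575 pages  (u32 prod/cons limit)
     * max_pages = 0xffff            =   65535 pages  (u16 tbuf_size limit)
     *
     * min(1048575, 65535) = 65535: today the u16 field is the binding
     * limit; the max_size check only bites if tbuf_size is ever widened
     * to u32, as George suggested it might be. */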
On Fri, Apr 01, George Dunlap wrote:
> On Wed, 2011-03-30 at 19:04 +0100, Olaf Hering wrote:
> > Using the u16 means each cpu could in theory use up to 256MB as trace
> > buffer. However, such a large allocation will currently fail on x86 due
> > to the MAX_ORDER limit.
>
> FWIW, I don't believe that there's any reason the allocations have to be
> contiguous any more. I kept them contiguous to minimize the changes to
> the moving parts near a release. But the new system has been pretty
> well tested now, so I think looking at non-contiguous allocations may be
> worthwhile.

George,

how do I allocate a few mfns and give them a virtual address?
I don't find a malloc-like interface to allocate random pages.

Olaf
>>> On 18.04.11 at 20:45, Olaf Hering <olaf@aepfle.de> wrote:
> On Fri, Apr 01, George Dunlap wrote:
>
>> FWIW, I don't believe that there's any reason the allocations have to be
>> contiguous any more. I kept them contiguous to minimize the changes to
>> the moving parts near a release. But the new system has been pretty
>> well tested now, so I think looking at non-contiguous allocations may be
>> worthwhile.
>
> how do I allocate a few mfns and give them a virtual address?
> I don't find a malloc-like interface to allocate random pages.

alloc_domheap_pages() followed by map_pages_to_xen() would be one way,
if you pre-reserve virtual address space (suitable hence only for 64-bit,
and only if you can set a reasonable upper bound on the amount you want
to map). Otherwise I think the only option is to introduce indirection
(using the 1:1 mapping, and setting up an array of pointers). That may
however be a little difficult if (and I think that's the case) data
chunks aren't always of the same size (as then you need to deal with the
roll-over into the next page).

Jan
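A minimal sketch of the indirection option Jan describes: allocate single,
non-contiguous pages and keep an array of pointers obtained through the 1:1
mapping. The names MAX_T_PAGES, t_page, and alloc_trace_pages are
hypothetical, and the sketch assumes a 64-bit build where page_to_virt()
works for any domheap page:

    #define MAX_T_PAGES 256              /* hypothetical upper bound */
    static void *t_page[MAX_T_PAGES];    /* hypothetical pointer array */

    static int alloc_trace_pages(unsigned int nr)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
        {
            /* NULL domain: pages belong to no guest. */
            struct page_info *pg = alloc_domheap_pages(NULL, 0, 0);

            if ( pg == NULL )
                goto unwind;
            t_page[i] = page_to_virt(pg);   /* address via the 1:1 map */
        }
        return 0;

     unwind:
        while ( i-- )
            free_domheap_pages(virt_to_page(t_page[i]), 0);
        return -ENOMEM;
    }

The roll-over problem Jan mentions is then handled by the consumer: a
record that would cross a page boundary has to be split across t_page[i]
and t_page[i+1].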
On Tue, Apr 26, Jan Beulich wrote:
> >>> On 18.04.11 at 20:45, Olaf Hering <olaf@aepfle.de> wrote:
> > how do I allocate a few mfns and give them a virtual address?
> > I don't find a malloc-like interface to allocate random pages.
>
> Otherwise I think the only option is to introduce indirection (using
> the 1:1 mapping, and setting up an array of pointers). That may
> however be a little difficult if (and I think that's the case) data
> chunks aren't always of the same size (as then you need to deal with
> the roll-over into the next page).

I'm almost done with the per-page handling in __insert_record().
I just need to figure out a usable address for a given mfn.
Is u8 *p = mfn_to_virt(mfn) the same as page_to_virt(mfn_to_page(mfn))?

Olaf
>>> On 06.05.11 at 12:25, Olaf Hering <olaf@aepfle.de> wrote:
> On Tue, Apr 26, Jan Beulich wrote:
>> Otherwise I think the only option is to introduce indirection (using
>> the 1:1 mapping, and setting up an array of pointers). That may
>> however be a little difficult if (and I think that's the case) data
>> chunks aren't always of the same size (as then you need to deal with
>> the roll-over into the next page).
>
> I'm almost done with the per-page handling in __insert_record().
> I just need to figure out a usable address for a given mfn.
> Is u8 *p = mfn_to_virt(mfn) the same as page_to_virt(mfn_to_page(mfn))?

Yes.

Jan
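Jan's "Yes", restated as code (illustrative only; mfn stands for any valid
machine frame number with a 1:1 mapping):

    /* Both expressions resolve to the same directmap address. */
    u8 *p = mfn_to_virt(mfn);
    ASSERT(p == (u8 *)page_to_virt(mfn_to_page(mfn)));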
On Mon, Apr 18, Olaf Hering wrote:
> On Fri, Apr 01, George Dunlap wrote:
>
> > FWIW, I don't believe that there's any reason the allocations have to be
> > contiguous any more. [...] I think looking at non-contiguous allocations
> > may be worthwhile.

Is there a way to allocate more than 128MB with repeated calls to
alloc_xenheap_page()? From which pool should the per-cpu trace buffers
get allocated? alloc_domheap_page() wants a domain, so I think that's
the wrong interface.

Olaf
On 06/05/2011 19:12, "Olaf Hering" <olaf@aepfle.de> wrote:

> Is there a way to allocate more than 128MB with repeated calls to
> alloc_xenheap_page()?

Yes, it should just work. Are you sure you actually have more than 128MB
available (not all allocated to dom0, for example)?

> From which pool should the per-cpu trace buffers
> get allocated? alloc_domheap_page() wants a domain, so I think that's
> the wrong interface.

Yes, sticking with alloc_xenheap_pages() is good.

 -- Keir
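In other words, a loop of order-0 allocations never runs into MAX_ORDER,
because each call only asks for a single page. A sketch, with pages[],
nr_pages, and the bound being hypothetical names:

    static void *pages[65536];           /* hypothetical bookkeeping array */

    unsigned int i;
    for ( i = 0; i < nr_pages; i++ )
    {
        pages[i] = alloc_xenheap_page(); /* order 0: no MAX_ORDER limit */
        if ( pages[i] == NULL )
            break;                       /* heap genuinely exhausted,
                                          * e.g. everything given to dom0 */
    }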
On Fri, May 06, Keir Fraser wrote:
> On 06/05/2011 19:12, "Olaf Hering" <olaf@aepfle.de> wrote:
> > Is there a way to allocate more than 128MB with repeated calls to
> > alloc_xenheap_page()?
>
> Yes, it should just work. Are you sure you actually have more than 128MB
> available (not all allocated to dom0, for example)?

Thanks Keir. xm mem-set 0 1024 did the trick.
I was under the impression dom0 would automatically get ballooned down
to make room for the allocation requests.

Olaf
On 07/05/2011 09:39, "Olaf Hering" <olaf@aepfle.de> wrote:

> Thanks Keir. xm mem-set 0 1024 did the trick.
> I was under the impression dom0 would automatically get ballooned down
> to make room for the allocation requests.

Only when creating new domains in xend will xend balloon down dom0. So
no, not in this case: we can hardly go run xend while we're in hypervisor
context. ;-) This kind of thing is why we normally run dom0 with 'just
enough' memory and don't use auto ballooning.

 -- Keir
>>> On 06.05.11 at 20:46, Keir Fraser <keir.xen@gmail.com> wrote:
> On 06/05/2011 19:12, "Olaf Hering" <olaf@aepfle.de> wrote:
>> From which pool should the per-cpu trace buffers
>> get allocated? alloc_domheap_page() wants a domain, so I think that's
>> the wrong interface.
>
> Yes, sticking with alloc_xenheap_pages() is good.

It really depends on whether you expect to get memory that has (even on
32-bit) a virtual mapping, or you want to map it at an arbitrary virtual
address afterwards. alloc_xenheap_pages() gives you mapped memory (and
the amount you can get is rather limited on 32-bit), while
alloc_domheap_pages(NULL, ...) gives you memory that has a mapping only
on 64-bit (and, once we'll find it necessary to support machines with
more than 5Tb, even that may not hold anymore) but is equally not
associated with any domain.

Jan
On 09/05/2011 09:30, "Jan Beulich" <JBeulich@novell.com> wrote:

>> Yes, sticking with alloc_xenheap_pages() is good.
>
> It really depends on whether you expect to get memory that has (even on
> 32-bit) a virtual mapping, or you want to map it at an arbitrary virtual
> address afterwards. alloc_xenheap_pages() gives you mapped memory (and
> the amount you can get is rather limited on 32-bit), while
> alloc_domheap_pages(NULL, ...) gives you memory that has a mapping only
> on 64-bit (and, once we'll find it necessary to support machines with
> more than 5Tb, even that may not hold anymore) but is equally not
> associated with any domain.

We have a mechanism for sharing xenheap pages with a guest, which
xentrace is already using. Doing the same with anonymous domheap pages
would be extra hassle. The limitation of xenheap on x86_32 is
uninteresting to me, especially when we're talking about a niche
developer feature like xentrace.

 -- Keir
On Fri, May 06, Keir Fraser wrote:
> On 06/05/2011 19:12, "Olaf Hering" <olaf@aepfle.de> wrote:
> > From which pool should the per-cpu trace buffers
> > get allocated? alloc_domheap_page() wants a domain, so I think that's
> > the wrong interface.
>
> Yes, sticking with alloc_xenheap_pages() is good.

Another question:
alloc_trace_bufs() calls alloc_xenheap_pages(0, MEMF_bits(32 + PAGE_SHIFT));

MEMF_bits() is not used in the i386 codepath in alloc_heap_pages(),
unless I'm missing something. Otherwise alloc_domheap_pages() is called,
which passes BITS_PER_LONG instead of 32 to domain_clamp_alloc_bitsize().

Is the hardcoded 32 + PAGE_SHIFT required in some way, or could I just
use the alloc_xenheap_page() macro and let alloc_domheap_pages() deal
with the allocation details?

Olaf
On 09/05/2011 13:43, "Olaf Hering" <olaf@aepfle.de> wrote:

> MEMF_bits() is not used in the i386 codepath in alloc_heap_pages(),
> unless I'm missing something.

On i386 the xenheap is drawn from the bottom 12MB of physical memory,
hence restricting the address width doesn't need to be explicitly
handled -- no caller requires addresses below 8MB.

> Otherwise alloc_domheap_pages() is called, which passes BITS_PER_LONG
> instead of 32 to domain_clamp_alloc_bitsize().
>
> Is the hardcoded 32 + PAGE_SHIFT required in some way,

The allocated MFNs get passed to dom0 userspace in a uint32_t array.
Hence MFNs cannot be wider than 32 bits. Hence physical addresses of
trace pages cannot be wider than 32 + PAGE_SHIFT bits.

> or could I just use the alloc_xenheap_page() macro and let
> alloc_domheap_pages() deal with the allocation details?

No, the explicit MEMF_bits restriction is required, unless you widen the
MFN arrays passed to dom0 userspace to contain uint64_t entries.

 -- Keir
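Keir's point, restated as a sketch (not the exact trace.c code): the MFN of
each trace page is exported to dom0 userspace in a uint32_t array, so the
page's machine address has to fit in 32 + PAGE_SHIFT bits at allocation
time.

    /* Restrict the machine address to 32 + PAGE_SHIFT bits ...           */
    void *rawbuf = alloc_xenheap_pages(0, MEMF_bits(32 + PAGE_SHIFT));

    /* ... so that the mfn itself fits in the uint32_t array shared with
     * dom0 userspace without truncation. */
    uint32_t mfn = virt_to_mfn(rawbuf);

Dropping the restriction, as Olaf asks, would silently truncate MFNs of
pages allocated above the 2^(32 + PAGE_SHIFT) boundary, which is why
widening the exported array would be the prerequisite.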