Our driver maintains its own set of bounce buffers rather than relying on
swiotlb. Some context can be found in this thread on the netdev mailing
list: http://marc.info/?l=linux-netdev&m=116430007806101&w=2

Given that we prefer to maintain our own bounce buffers, I could neither
find a dependable API that lets the driver allocate machine-contiguous
memory (that could later be passed to *pci_map_single()*) nor find a
suitable hook to xen_create_contiguous_region.

As an aside, is there a tweak to "increase" the contiguous memory regions
available so that xen_create_contiguous_region() succeeds? On DomUs, when
swiotlb is enabled, I see some crashes which I would like to avoid by
redistributing memory resources.

Jambunathan K.
Keir Fraser
2007-Apr-06 11:07 UTC
Re: [Xen-devel] xen_create_contiguous_region - Regarding
On 6/4/07 11:46, "Jambunathan K" <jambunathan@netxen.com> wrote:

> Given that we prefer to maintain our own bounce buffers, I could neither
> find a dependable API that lets the driver allocate machine-contiguous
> memory (that could later be passed to *pci_map_single()*) nor find a
> suitable hook to xen_create_contiguous_region.

pci_map_single() will do what you want automatically. If you really want to
allocate contiguous memory yourself, allocate it then call
xen_create_contiguous_region().

> As an aside, is there a tweak to "increase" the contiguous memory regions
> available so that xen_create_contiguous_region() succeeds? On DomUs, when
> swiotlb is enabled, I see some crashes which I would like to avoid by
> redistributing memory resources.

Xen will try to maintain contiguity as far as possible, but it cannot
defragment the memory map.

 -- Keir
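For readers following along, the "allocate it yourself, then convert it"
route Keir mentions would look roughly like the sketch below. This is a
minimal illustration only: the helper name, size handling and error paths
are made up for the example, and it assumes the prototype carried in the
linux-2.6-xen-sparse tree of this era, int xen_create_contiguous_region(
unsigned long vstart, unsigned int order, unsigned int address_bits),
declared via <asm/hypervisor.h>.

    /* Sketch: get guest-contiguous pages, then ask Xen to back them with
     * a machine-contiguous run below (1 << address_bits), so the buffer
     * can later be handed to pci_map_single(). Illustrative only. */
    #include <linux/gfp.h>
    #include <asm/hypervisor.h>

    static void *alloc_machine_contiguous(unsigned int order,
                                          unsigned int address_bits)
    {
            unsigned long vstart = __get_free_pages(GFP_KERNEL, order);

            if (!vstart)
                    return NULL;

            /* Exchange the underlying frames for machine-contiguous ones. */
            if (xen_create_contiguous_region(vstart, order, address_bits)) {
                    free_pages(vstart, order);
                    return NULL;
            }
            return (void *)vstart;
    }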
Jambunathan K
2007-Apr-06 12:54 UTC
Re: [Xen-devel] xen_create_contiguous_region - Regarding
Keir Fraser wrote:

> pci_map_single() will do what you want automatically. If you really want to
> allocate contiguous memory yourself, allocate it then call
> xen_create_contiguous_region().

Could you please export xen_create_contiguous_region for wider use?

Jambunathan K.
Jambunathan K
2007-Apr-09 13:17 UTC
Re: [Xen-devel] xen_create_contiguous_region - Regarding
Given that our driver supports a 35-bit DMA mask, the fact that
xen_create_contiguous_region constrains "machine contiguous" allocations
to less than dma_bits seems very limiting.

What would be the recommended way to get machine-contiguous pages within,
say, (1<<35)?

Regards,
Jambunathan K.
Keir Fraser
2007-Apr-09 13:24 UTC
Re: [Xen-devel] xen_create_contiguous_region - Regarding
On 9/4/07 14:17, "Jambunathan K" <jambunathan@netxen.com> wrote:

> Given that our driver supports a 35-bit DMA mask, the fact that
> xen_create_contiguous_region constrains "machine contiguous" allocations
> to less than dma_bits seems very limiting.
>
> What would be the recommended way to get machine-contiguous pages within,
> say, (1<<35)?

The address width is a parameter to xen_create_contiguous_region().

 -- Keir
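Concretely, and assuming the same sparse-tree prototype sketched earlier
(vstart, order, address_bits), a driver with a 35-bit mask would pass the
width as the third argument. Whether Xen 3.0.4 can actually honour widths
other than dma_bitsize is exactly what the rest of the thread disputes, so
treat this as an illustration of the interface, not a guarantee:

    /* Illustrative call: ask for machine addresses below 1 << 35. */
    rc = xen_create_contiguous_region(vstart, order, 35 /* address_bits */);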
Jambunathan K
2007-Apr-10 06:25 UTC
Re: [Xen-devel] xen_create_contiguous_region - Regarding
Keir

>>> Given that our driver supports a 35-bit DMA mask, the fact that
>>> xen_create_contiguous_region constrains "machine contiguous" allocations
>>> to less than dma_bits seems very limiting.
>>>
>>> What would be the recommended way to get machine-contiguous pages within,
>>> say, (1<<35)?
>>
>> The address width is a parameter to xen_create_contiguous_region().

Let me explain what I meant.

xen_create_contiguous_region (as in Xen 3.0.4) can be instructed to make
two kinds of allocations - from either MEMZONE_DOM or MEMZONE_DMADOM. The
two zones are delineated by max_dma_mfn, as dictated by dma_bitsize.

A "MEMF_dma" request to __alloc_domheap_pages is assured of being satisfied
from MEMZONE_DMADOM.

The role of address_bits apparently stops at choosing between one of these
two zones. My understanding is that xen_create_contiguous_region() *cannot*
assure allocations within, say, (1<<35) (and, desirably, from outside of
MEMZONE_DMADOM).

In memory_exchange:

    if ( (exch.out.address_bits != 0) &&
         (exch.out.address_bits <
          (get_order_from_pages(max_page) + PAGE_SHIFT)) )
    {
        if ( exch.out.address_bits < dma_bitsize )
        {
            rc = -ENOMEM;
            goto fail_early;
        }
        memflags = MEMF_dma;
    }

The above code snippet requires that address_bits be at least dma_bitsize,
in which case it flags the allocation request as one to be satisfied from
MEMZONE_DMADOM. This seems a bit counter-intuitive to me. Is address_bits
not a mandated "spec" on the output extent?

In essence, I have the following requests w.r.t.
xen_create_contiguous_region():

1) Export it.
2) Have it honor the address_bits spec.

I can try my hand at submitting a patch if I get an in-principle nod.

Thanks,
Jambunathan K.
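For context on where exch.out.address_bits comes from: the guest-side
helper builds a XENMEM_exchange request and carries the caller's
address_bits in the output reservation, which is the value the hypervisor
check quoted above compares against dma_bitsize. The fragment below is a
rough, simplified sketch of that plumbing (field names as in the public
memory.h interface of this era); the extent_start arrays,
set_xen_guest_handle() calls and error handling are omitted, so it is not
a drop-in piece of the real implementation.

    /* Rough sketch of the hypercall issued by xen_create_contiguous_region().
     * Extent lists and error handling omitted; illustrative only. */
    #include <xen/interface/memory.h>

    struct xen_memory_exchange exchange = {
            .in = {
                    .nr_extents   = 1UL << order,   /* singleton frames in */
                    .extent_order = 0,
                    .domid        = DOMID_SELF
            },
            .out = {
                    .nr_extents   = 1,              /* one contiguous extent out */
                    .extent_order = order,
                    .address_bits = address_bits,   /* e.g. 35 for our NIC */
                    .domid        = DOMID_SELF
            }
    };

    rc = HYPERVISOR_memory_op(XENMEM_exchange, &exchange);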
Keir Fraser
2007-Apr-10 09:17 UTC
Re: [Xen-devel] xen_create_contiguous_region - Regarding
On 10/4/07 07:25, "Jambunathan K" <jambunathan@netxen.com> wrote:

> xen_create_contiguous_region (as in Xen 3.0.4) can be instructed to make
> two kinds of allocations - from either MEMZONE_DOM or MEMZONE_DMADOM. The
> two zones are delineated by max_dma_mfn, as dictated by dma_bitsize.
>
> A "MEMF_dma" request to __alloc_domheap_pages is assured of being satisfied
> from MEMZONE_DMADOM.
>
> The role of address_bits apparently stops at choosing between one of these
> two zones.

This is an implementation detail inside the hypervisor. The fact that Xen
3.0.4 doesn't actually track each bit-width of memory separately should not
affect your use of the guest memory-allocation interfaces.

The fact that when you try to allocate 35-bit memory you are actually
limited to 31-bit memory is simply a limitation you'll have to work with
for 3.0.4. Xen 3.0.5 will not have this limitation.

 -- Keir