Tristan Gingold
2006-Apr-21 07:24 UTC
[Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
Hi,

on IA64 flushing the whole TLB is very expensive: it means a CPU TLB flush plus clearing 16MB of memory (the virtual TLB). Flushing an address range, however, is rather cheap, and flushing an address range on every processor is also cheap (no IPI).

Unfortunately, the Xen common code flushes the whole TLB after unmapping a grant reference. Currently this flush does not happen on IA64 because domain_dirty_cpumask is never set (a bug!).

We could flush the TLB by range within destroy_grant_host_mapping, but then we would need to disable the flush_tlb_mask call.

What is the best solution?

Thank you for comments,
Tristan.
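[As a rough illustration of the alternative Tristan describes: a sketch only, not actual Xen code. destroy_grant_host_mapping and flush_tlb_mask are the real names discussed in the thread; arch_remove_grant_mapping() and flush_tlb_range_all_cpus() are hypothetical helpers invented for the example.]

    /*
     * Sketch: purge only the unmapped page by range at unmap time, so the
     * later global flush_tlb_mask() over domain_dirty_cpumask would not be
     * needed on ia64.  Helper names are hypothetical.
     */
    static int destroy_grant_host_mapping_sketch(unsigned long addr,
                                                 unsigned long frame,
                                                 unsigned int flags)
    {
        int rc = arch_remove_grant_mapping(addr, frame, flags);

        if ( rc != 0 )
            return rc;

        /*
         * On ia64 a ptc.ga-style purge of one page is broadcast to all
         * processors by hardware, so the range flush needs no IPI and
         * stays cheap even on SMP.
         */
        flush_tlb_range_all_cpus(addr, addr + PAGE_SIZE);

        return 0;
    }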
Xu, Anthony
2006-Apr-21 07:27 UTC
[Xen-devel] RE: [Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
>From: Tristan Gingold
>Sent: 2006-04-21 15:24
>To: xen-devel@lists.xensource.com; xen-ia64-devel@lists.xensource.com
>Subject: [Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
>
>on IA64 flushing the whole TLB is very expensive: it means a CPU TLB flush
>plus clearing 16MB of memory (the virtual TLB). Flushing an address range,
>however, is rather cheap, and flushing an address range on every processor
>is also cheap (no IPI).
>
>Unfortunately, the Xen common code flushes the whole TLB after unmapping a
>grant reference.

Agreed.

>Currently this flush does not happen on IA64 because domain_dirty_cpumask is
>never set (a bug!).
>
>We could flush the TLB by range within destroy_grant_host_mapping, but then
>we would need to disable the flush_tlb_mask call.
>
>What is the best solution?

It depends on the coverage of the VHPT and the coverage of the purged range. The Linux kernel uses the same approach: if the purged range is smaller than a fixed threshold, it flushes the TLB by range; if it is larger, it flushes the whole TLB.
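[For concreteness, the threshold heuristic Anthony describes could look roughly like the sketch below. The cutoff value and the flush_tlb_range()/flush_tlb_all() helpers are assumptions for illustration, not values or interfaces taken from Linux or Xen.]

    /*
     * Sketch of the threshold heuristic described above.  The limit and
     * the helper names are illustrative only.
     */
    #define RANGE_FLUSH_LIMIT  (64UL << 14)   /* e.g. 64 pages of 16KB */

    static void flush_after_unmap(unsigned long start, unsigned long len)
    {
        if ( len <= RANGE_FLUSH_LIMIT )
            /* Small range: purge just these addresses, no IPI on ia64. */
            flush_tlb_range(start, start + len);
        else
            /* Large range: one full purge beats many per-page purges. */
            flush_tlb_all();
    }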
Tristan Gingold
2006-Apr-21 07:42 UTC
Re: [Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
On Friday 21 April 2006 09:27, Xu, Anthony wrote:
> > What is the best solution?
>
> It depends on the coverage of the VHPT and the coverage of the purged range.

From my point of view, the problem is not the number of frames to be purged; I suppose only a few pages are unmapped per unmap_grant_ref call (although I may be wrong here).

The real problem is how to make the Xen common code more arch-neutral.

Tristan.
Hollis Blanchard
2006-Apr-21 21:15 UTC
Re: [Xen-devel] Re: [Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
On Fri, 2006-04-21 at 09:42 +0200, Tristan Gingold wrote:
> From my point of view, the problem is not the number of frames to be
> purged; I suppose only a few pages are unmapped per unmap_grant_ref call
> (although I may be wrong here).
>
> The real problem is how to make the Xen common code more arch-neutral.

I think the obvious solution would be to provide more information to the arch code, e.g.
	flush_grant_ref(gnttab_unmap_grant_ref_t *ref)
or maybe
	flush_tlb_range(ulong maddr, ulong len)

x86 could ignore this extra data and simply flush the whole TLB as it does now.

-- 
Hollis Blanchard
IBM Linux Technology Center
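[A sketch of what such an arch hook might look like, loosely following Hollis's flush_tlb_range(ulong maddr, ulong len) suggestion. The hook names and signatures are hypothetical, not an actual Xen interface.]

    /*
     * Illustrative only: a per-arch hook that receives the unmapped range.
     * Names and signatures are hypothetical.
     */

    /* x86 version: ignore the range and keep today's behaviour. */
    static void arch_flush_after_grant_unmap_x86(struct domain *d,
                                                 unsigned long maddr,
                                                 unsigned long len)
    {
        /* Full flush over the dirty mask, as the common code does now. */
        flush_tlb_mask(d->domain_dirty_cpumask);
    }

    /* ia64 version: purge only the range; the purge is broadcast, no IPI. */
    static void arch_flush_after_grant_unmap_ia64(struct domain *d,
                                                  unsigned long maddr,
                                                  unsigned long len)
    {
        flush_tlb_range(maddr, maddr + len);
    }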
Tristan Gingold
2006-Apr-24 11:37 UTC
Re: [Xen-devel] Re: [Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
On Friday 21 April 2006 23:15, Hollis Blanchard wrote:
> I think the obvious solution would be to provide more information to the
> arch code, e.g.
> 	flush_grant_ref(gnttab_unmap_grant_ref_t *ref)
> or maybe
> 	flush_tlb_range(ulong maddr, ulong len)
>
> x86 could ignore this extra data and simply flush the whole TLB as it
> does now.

Yes, I am going this way. However, it is not so simple: we want to call flush_tlb_range for every page, but flush_tlb_all only once...

Tristan.
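[The once-only constraint Tristan mentions could be handled by deferring the fall-back full flush until after the unmap loop, roughly as sketched below. The struct, do_unmap() and arch_wants_range_flush() are hypothetical names used only for illustration.]

    /*
     * Sketch of the batching constraint: a range flush per page inside the
     * loop, but at most one full flush after it.  All names hypothetical.
     */
    struct gnttab_unmap_op_sketch {
        unsigned long addr;
        /* other fields elided */
    };

    static void gnttab_unmap_batch_sketch(struct gnttab_unmap_op_sketch *ops,
                                          unsigned int count)
    {
        unsigned int i;
        int need_full_flush = 0;

        for ( i = 0; i < count; i++ )
        {
            do_unmap(&ops[i]);

            if ( arch_wants_range_flush() )
                /* Cheap per-page purge, e.g. on ia64. */
                flush_tlb_range(ops[i].addr, ops[i].addr + PAGE_SIZE);
            else
                /* Defer the expensive full flush. */
                need_full_flush = 1;
        }

        if ( need_full_flush )
            flush_tlb_all();   /* issued once, after the loop */
    }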