search for: flush_tlb_mask

Displaying 20 results from an estimated 23 matches for "flush_tlb_mask".

2006 Apr 21
4
[Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
...ssors is also cheap (no IPI). Unfortunately, Xen common code flushes the whole TLB after unmapping a grant reference. Currently, this is not done on IA64 because domain_dirty_cpumask is never set (bug!). We can flush the TLB by range within destroy_grant_host_mapping. But then we need to disable the flush_tlb_mask call. What is the best solution? Thank you for comments, Tristan.
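The choice Tristan poses is between the common code's single coarse flush_tlb_mask() after the unmap and a targeted per-range purge at the unmap site. A minimal sketch of the second option follows; the helper names and the hook's exact signature are assumptions for illustration, not the real Xen/ia64 code:

    /*
     * Hypothetical sketch: flush only the virtual range that backed
     * the grant mapping, at the point where it is torn down, instead
     * of relying on a whole-TLB flush_tlb_mask() in common code
     * afterwards. On ia64 the ptc.ga instruction propagates a purge
     * to all processors without IPIs, which is what makes the range
     * flush cheap here.
     */
    static int destroy_grant_host_mapping(unsigned long addr,
                                          unsigned long mfn,
                                          unsigned int flags)
    {
        clear_grant_pte(addr, mfn);           /* hypothetical helper */

        /* Purge this page's translations, globally, without IPIs. */
        global_tlb_purge_range(addr, addr + PAGE_SIZE, PAGE_SHIFT);

        return GNTST_okay;
    }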
2019 Jun 13
4
[PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...DOWN); } -static bool tlb_is_not_lazy(int cpu, void *data) +static inline bool tlb_is_not_lazy(int cpu) { return !per_cpu(cpu_tlbstate.is_lazy, cpu); } -void native_flush_tlb_others(const struct cpumask *cpumask, - const struct flush_tlb_info *info) +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask); + +void native_flush_tlb_multi(const struct cpumask *cpumask, + const struct flush_tlb_info *info) { + /* + * Do accounting and tracing. Note that there are (and have always been) + * cases in which a remote TLB flush will be traced, but eventually + * would not happen. + */ count_vm...
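One detail worth calling out in this hunk: the new DEFINE_PER_CPU(cpumask_t, flush_tlb_mask) gives each CPU a scratch mask for computing the set of CPUs to IPI, since a cpumask_t can be too large for the stack with big NR_CPUS. A rough sketch of that pattern — the function and variable names here are illustrative, not the actual arch/x86/mm/tlb.c:

    #include <linux/cpumask.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(cpumask_t, scratch_mask);

    /*
     * Build the subset of 'targets' that actually needs an IPI,
     * skipping lazy-TLB CPUs as tlb_is_not_lazy() does above. The
     * per-CPU buffer is safe because the mask is built and consumed
     * with preemption disabled in the flush path. Illustrative only.
     */
    static void ipi_non_lazy_cpus(const struct cpumask *targets)
    {
        cpumask_t *scratch = this_cpu_ptr(&scratch_mask);
        int cpu;

        cpumask_clear(scratch);
        for_each_cpu(cpu, targets)
            if (tlb_is_not_lazy(cpu))
                cpumask_set_cpu(cpu, scratch);

        /* ... send the flush IPI to the CPUs in *scratch ... */
    }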
2019 Jun 25
0
[PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...hese kinds of patches, I'd resist the urge to do these kinds of tweaks, especially since it starts to hide the important change on the line. > -void native_flush_tlb_others(const struct cpumask *cpumask, > - const struct flush_tlb_info *info) > +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask); > + > +void native_flush_tlb_multi(const struct cpumask *cpumask, > + const struct flush_tlb_info *info) > { > + /* > + * Do accounting and tracing. Note that there are (and have always been) > + * cases in which a remote TLB flush will be traced, but eventually >...
2019 Jun 26
2
[PATCH 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...st the urge to do these kinds of tweaks, > especially since it starts to hide the important change on the line. Of course. > >> -void native_flush_tlb_others(const struct cpumask *cpumask, >> - const struct flush_tlb_info *info) >> +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask); >> + >> +void native_flush_tlb_multi(const struct cpumask *cpumask, >> + const struct flush_tlb_info *info) >> { >> + /* >> + * Do accounting and tracing. Note that there are (and have always been) >> + * cases in which a remote TLB flush will be...
2019 May 31
2
[RFC PATCH v2 04/12] x86/mm/tlb: Flush remote and local TLBs concurrently
...DOWN); } -static bool tlb_is_not_lazy(int cpu, void *data) +static inline bool tlb_is_not_lazy(int cpu) { return !per_cpu(cpu_tlbstate.is_lazy, cpu); } -void native_flush_tlb_others(const struct cpumask *cpumask, - const struct flush_tlb_info *info) +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask); + +void native_flush_tlb_multi(const struct cpumask *cpumask, + const struct flush_tlb_info *info) { + /* + * native_flush_tlb_multi() can handle a single CPU, but it is + * suboptimal if the local TLB should be flushed, and therefore should + * not be used in such case. Check that it i...
2019 May 25
3
[RFC PATCH 5/6] x86/mm/tlb: Flush remote and local TLBs concurrently
...DOWN); } -static bool tlb_is_not_lazy(int cpu, void *data) +static inline bool tlb_is_not_lazy(int cpu) { return !per_cpu(cpu_tlbstate.is_lazy, cpu); } -void native_flush_tlb_others(const struct cpumask *cpumask, - const struct flush_tlb_info *info) +static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask); + +void native_flush_tlb_multi(const struct cpumask *cpumask, + const struct flush_tlb_info *info) { + /* + * native_flush_tlb_multi() can handle a single CPU, but it is + * suboptimal if the local TLB should be flushed, and therefore should + * not be used in such case. Check that it i...
2007 Jun 27
1
[PATCH 7/10] SMP support to Xen PM
...return -EBUSY; + pmprintk(XENLOG_INFO, "PM: Preparing system for %s sleep\n", acpi_states[state]); + + /* Sync all lazy states on other cpus, since APs will be + * re-initialized like a fresh boot and stale context would be lost + */ + cpu_clear(0, mask); + flush_tlb_mask(mask); + pmprintk(XENLOG_INFO, "Finish lazy state sync\n"); + + disable_nonboot_cpus(); + if (num_online_cpus() != 1) { + error = -EBUSY; + goto Enable_cpu; + } local_irq_save(flags); @@ -141,20 +182,8 @@ int enter_state(u32 state) Done: local_irq...
2012 Oct 11
14
alloc_heap_pages is low efficient with more CPUs
I am puzzled by a problem: I have a blade with 64 physical CPUs and 64G of physical RAM, and defined only one VM with 1 CPU and 40G of RAM. The first time I started the VM it took just 3s, but the second start took 30s. After studying it by printing logs, I located a place in the hypervisor that costs too much time, occupying 98% of the whole starting time. xen/common/page_alloc.c
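If the hot spot really is a TLB flush issued once per page inside the allocation loop, the usual remedy is to batch: accumulate the CPUs that may hold stale translations across the whole loop and issue flush_tlb_mask() once at the end. A hedged sketch of that shape — page_needs_tlbflush() and the stale_cpus field are stand-ins, not the real xen/common/page_alloc.c:

    cpumask_t need_flush;
    unsigned long i;

    cpumask_clear(&need_flush);

    for ( i = 0; i < nr_pages; i++ )
    {
        struct page_info *pg = &pages[i];

        /* Was this page freed recently enough that some CPU may
         * still cache a translation for it? (stand-in predicate) */
        if ( page_needs_tlbflush(pg) )
            cpumask_or(&need_flush, &need_flush, &pg->stale_cpus);
    }

    /* One flush -- and one round of IPIs -- for the whole batch,
     * instead of one per page. */
    if ( !cpumask_empty(&need_flush) )
        flush_tlb_mask(&need_flush);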
2013 May 07
1
[PATCH V2] xen/arm: implement smp_call_function
..._interrupt(); + break; default: panic("Unhandled SGI %d on CPU%d\n", sgi, smp_processor_id()); break; diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c index 2a429bd..4042db5 100644 --- a/xen/arch/arm/smp.c +++ b/xen/arch/arm/smp.c @@ -11,17 +11,14 @@ void flush_tlb_mask(const cpumask_t *mask) flush_xen_data_tlb(); } -void smp_call_function( - void (*func) (void *info), - void *info, - int wait) +void smp_send_event_check_mask(const cpumask_t *mask) { - printk("%s not implmented\n", __func__); + send_SGI_mask(mask, GIC_SGI_EVENT_CH...
2019 Jul 19
0
[PATCH v3 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...ld be rare, with native_flush_tlb_others skipping + * This should be rare, with native_flush_tlb_multi() skipping * IPIs to lazy TLB mode CPUs. */ switch_mm_irqs_off(NULL, &init_mm, NULL); @@ -665,9 +665,14 @@ static bool tlb_is_not_lazy(int cpu) static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask); -void native_flush_tlb_others(const struct cpumask *cpumask, - const struct flush_tlb_info *info) +void native_flush_tlb_multi(const struct cpumask *cpumask, + const struct flush_tlb_info *info) { + /* + * Do accounting and tracing. Note that there are (and have always been) + *...
2007 Mar 20
62
RFC: [0/2] Remove netloop by lazy copying in netback
Hi Keir: These two patches remove the need for netloop by performing the copying in netback, and only if it is necessary. The rationale is that most packets will be processed without delay, allowing them to be freed without copying at all. So instead of copying every packet destined to dom0, we'll only copy those that linger longer than a specified amount of time (currently 0.5s). As it
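The policy described reduces to a timestamp check: stamp each packet when its page is lent to dom0, and copy only those still outstanding after the grace period. A rough illustration under those assumptions — the structure and helper names are invented for the sketch, not the real netback code:

    #define COPY_GRACE_NS (500ULL * 1000 * 1000)  /* 0.5s, per the posting */

    struct pending_pkt {
        uint64_t queued_ns;   /* when the page was handed to dom0 */
        /* ... grant reference, frame, completion state ... */
    };

    /* Run periodically: most packets are freed before the grace
     * period expires, so only the rare lingering ones pay for a copy. */
    static void copy_lingering(struct pending_pkt *pkts, unsigned int n,
                               uint64_t now_ns)
    {
        unsigned int i;

        for ( i = 0; i < n; i++ )
        {
            if ( now_ns - pkts[i].queued_ns < COPY_GRACE_NS )
                continue;
            copy_and_release(&pkts[i]);   /* hypothetical helper */
        }
    }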
2019 Jul 02
0
[PATCH v2 4/9] x86/mm/tlb: Flush remote and local TLBs concurrently
...} +void flush_tlb_func_local(const struct flush_tlb_info *info) +{ + __flush_tlb_func_local((void *)info); +} + static void flush_tlb_func_remote(void *info) { const struct flush_tlb_info *f = info; @@ -665,9 +670,14 @@ static bool tlb_is_not_lazy(int cpu) static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask); -void native_flush_tlb_others(const struct cpumask *cpumask, - const struct flush_tlb_info *info) +void native_flush_tlb_multi(const struct cpumask *cpumask, + const struct flush_tlb_info *info) { + /* + * Do accounting and tracing. Note that there are (and have always been) + *...
2019 Jul 02
2
[PATCH v2 0/9] x86: Concurrent TLB flushes
Currently, local and remote TLB flushes are not performed concurrently, which introduces unnecessary overhead - each INVLPG can take 100s of cycles. This patch-set allows TLB flushes to be run concurrently: first request the remote CPUs to initiate the flush, then run it locally, and finally wait for the remote CPUs to finish their work. In addition, there are various small optimizations to avoid
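The three-step ordering in the cover letter — kick the remote CPUs, flush locally while they work, then wait — is where the concurrency comes from; the serial scheme pays for the local flush and the remote round-trip back to back. A hedged sketch of the concurrent shape (the helpers are assumptions, not the actual arch/x86/mm/tlb.c API):

    static void flush_tlb_concurrent(const struct cpumask *cpus,
                                     const struct flush_tlb_info *info)
    {
        /* 1. Fire the flush IPIs without waiting for completion. */
        send_flush_ipis_nowait(cpus, info);     /* hypothetical */

        /* 2. The local flush now overlaps with the remote handlers. */
        if (cpumask_test_cpu(smp_processor_id(), cpus))
            local_flush_tlb_with(info);         /* hypothetical */

        /* 3. Pay the wait cost once, for all remote CPUs together. */
        wait_for_flush_acks(cpus);              /* hypothetical */
    }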
2019 Jul 19
5
[PATCH v3 0/9] x86: Concurrent TLB flushes
[ Cover-letter is identical to v2, including benchmark results, excluding the change log. ] Currently, local and remote TLB flushes are not performed concurrently, which introduces unnecessary overhead - each INVLPG can take 100s of cycles. This patch-set allows TLB flushes to be run concurrently: first request the remote CPUs to initiate the flush, then run it locally, and finally wait for
2012 Dec 10
26
[PATCH 00/11] Add virtual EPT support Xen.
From: Zhang Xiantao <xiantao.zhang@intel.com> With virtual EPT support, an L1 hypervisor can use EPT hardware for L2 guest memory virtualization. In this way, L2 guest performance can be improved sharply. According to our testing, some benchmarks show a > 5x performance gain. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Zhang Xiantao (11):
2012 Jan 09
39
[PATCH v4 00/25] xen: ARMv7 with virtualization extensions
Hello everyone, this is the fourth version of the patch series that introduces support for ARMv7 with virtualization extensions in Xen. The series allows Xen and Dom0 to boot on a Cortex-A15 based Versatile Express simulator. See the following announcement email for more information about what we are trying to achieve, as well as the original git history: See