Displaying 12 results from an estimated 12 matches for "xa_load".
2020 Jan 16
1
[PATCH v6 5/6] nouveau: use new mmu interval notifiers
...c
Look for the word 'implicit'
mlx5_ib_invalidate_range() releases the interval_notifier when there are
no populated shadow PTEs in its leaf
pagefault_implicit_mr() creates an interval_notifier that covers the
level in the page table that needs population. Notice it just uses an
unlocked xa_load to find the page table level.
The locking is pretty tricky as it relies on RCU, but the fault flow
is fairly lightweight.
Jason
2019 Aug 07
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...n(range, pfn) | cpu_flags;
> }
> - if (hmm_vma_walk->pgmap) {
> - put_dev_pagemap(hmm_vma_walk->pgmap);
> - hmm_vma_walk->pgmap = NULL;
Putting the value in the hmm_vma_walk would have made some sense to me
if the pgmap was not set to NULL all over the place. Then most of the
xa_loads would be eliminated, as I would expect the pgmap to be mostly
uniform for these use cases.
Is there some reason the pgmap ref can't be held across
faulting/sleeping? ie like below.
Anyhow, I looked over this pretty carefully and the change looks
functionally OK, I just don't know w...
2019 Aug 07
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...vma_walk->pgmap) {
> > - put_dev_pagemap(hmm_vma_walk->pgmap);
> > - hmm_vma_walk->pgmap = NULL;
>
> Putting the value in the hmm_vma_walk would have made some sense to me
> if the pgmap was not set to NULL all over the place. Then most of the
> xa_loads would be eliminated, as I would expect the pgmap to be mostly
> uniform for these use cases.
>
> Is there some reason the pgmap ref can't be held across
> faulting/sleeping? ie like below.
No restriction on holding refs over faulting / sleeping.
>
> Anyhow, I looked o...
2020 Apr 28
0
[PATCH v3 64/75] x86/sev-es: Cache CPUID results for improved performance
...>cx << 32;
+ break;
+ default:
+ hi = 0;
+ }
+
+ return hi | lo;
+}
+
+static bool sev_es_check_cpuid_cache(struct es_em_ctxt *ctxt,
+ unsigned long cache_index)
+{
+ struct sev_es_cpuid_cache_entry *cache_entry;
+
+ if (cache_index == ULONG_MAX)
+ return false;
+
+ cache_entry = xa_load(&sev_es_cpuid_cache, cache_index);
+ if (!cache_entry)
+ return false;
+
+ ctxt->regs->ax = cache_entry->eax;
+ ctxt->regs->bx = cache_entry->ebx;
+ ctxt->regs->cx = cache_entry->ecx;
+ ctxt->regs->dx = cache_entry->edx;
+
+ return true;
+}
+
+static void se...
2020 Jan 16
2
[PATCH v6 5/6] nouveau: use new mmu interval notifiers
On Wed, Jan 15, 2020 at 02:09:47PM -0800, Ralph Campbell wrote:
> I don't understand the lifetime/membership issue. The driver is the only thing
> that allocates, inserts, or removes struct mmu_interval_notifier and thus
> completely controls the lifetime.
If the returned value is on the deferred list it could be freed at any
moment. The existing locks do not prevent it.
> >
2020 Mar 19
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...ed long pfn;
> + void *ptr;
> +
> + ptr = bounce->ptr + ((start - bounce->addr) & PAGE_MASK);
> +
> + for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
> + void *entry;
> + struct page *page;
> + void *tmp;
> +
> + entry = xa_load(&dmirror->pt, pfn);
> + page = xa_untag_pointer(entry);
> + if (!page)
> + return -ENOENT;
> +
> + tmp = kmap(page);
> + memcpy(ptr, tmp, PAGE_SIZE);
> + kunmap(page);
> +
> + ptr += PAGE_SIZE;
> + bounce->cpages++;
> + }
> +
> + return 0;
&...
2020 Mar 17
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/17/20 5:59 AM, Christoph Hellwig wrote:
> On Tue, Mar 17, 2020 at 09:47:55AM -0300, Jason Gunthorpe wrote:
>> I've been using v7 of Ralph's tester and it is working well - it has
>> DEVICE_PRIVATE support so I think it can test this flow too. Ralph are
>> you able?
>>
>> This hunk seems trivial enough to me, can we include it now?
>
> I can send
2020 Mar 19
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...> +
>> + ptr = bounce->ptr + ((start - bounce->addr) & PAGE_MASK);
>> +
>> + for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
>> + void *entry;
>> + struct page *page;
>> + void *tmp;
>> +
>> + entry = xa_load(&dmirror->pt, pfn);
>> + page = xa_untag_pointer(entry);
>> + if (!page)
>> + return -ENOENT;
>> +
>> + tmp = kmap(page);
>> + memcpy(ptr, tmp, PAGE_SIZE);
>> + kunmap(page);
>> +
>> + ptr += PAGE_SIZE;
>> + bounce->cpag...
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API and a set of stand alone self
tests is added to help validate and maintain correctness.
The patches are based on linux-5.5.0-rc6 and are for
2019 Aug 06
24
hmm cleanups, v2
Hi Jérôme, Ben, Felix and Jason,
below is a series against the hmm tree which cleans up various minor
bits and allows HMM_MIRROR to be built on all architectures.
Diffstat:
11 files changed, 94 insertions(+), 210 deletions(-)
A git tree is also available at:
git://git.infradead.org/users/hch/misc.git hmm-cleanups.2
Gitweb:
2020 Apr 28
116
[PATCH v3 00/75] x86: SEV-ES Guest Support
Hi,
here is the next version of changes to enable Linux to run as an SEV-ES
guest. The code was rebased to v5.7-rc3 and got a fair number of changes
since the last version.
What is SEV-ES
==============
SEV-ES is an acronym for 'Secure Encrypted Virtualization - Encrypted
State' and denotes a hardware feature of AMD processors which hides the
register state of VCPUs from the hypervisor by
2020 Apr 28
116
[PATCH v3 00/75] x86: SEV-ES Guest Support
Hi,
here is the next version of changes to enable Linux to run as an SEV-ES
guest. The code was rebased to v5.7-rc3 and got a fair number of changes
since the last version.
What is SEV-ES
==============
SEV-ES is an acronym for 'Secure Encrypted Virtualization - Encrypted
State' and denotes a hardware feature of AMD processors which hides the
register state of VCPUs from the hypervisor by