search for: set_memory_decrypted

Displaying 17 results from an estimated 37 matches for "set_memory_decrypted".

2019 May 08
2
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...def __ASSEMBLY__ > + > +#define sme_me_mask 0ULL This is rather ugly, but I understand why it's there > + > +static inline bool sme_active(void) { return false; } > +extern bool sev_active(void); > + > +int set_memory_encrypted(unsigned long addr, int numpages); > +int set_memory_decrypted(unsigned long addr, int numpages); > + > +#endif /* __ASSEMBLY__ */ > + > +#endif /* S390_MEM_ENCRYPT_H__ */ > + > diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c > index 3e82f66d5c61..7e3cbd15dcfa 100644 > --- a/arch/s390/mm/init.c > +++ b/arch/s390/mm/init.c >...
2020 Apr 14
3
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
... + /* Allocate GHCB pages */ >> + ghcb_page = __alloc_percpu(sizeof(struct ghcb), PAGE_SIZE); >> + >> + /* Initialize per-cpu GHCB pages */ >> + for_each_possible_cpu(cpu) { >> + struct ghcb *ghcb = (struct ghcb *)per_cpu_ptr(ghcb_page, cpu); >> + >> + set_memory_decrypted((unsigned long)ghcb, >> + sizeof(*ghcb) >> PAGE_SHIFT); >> + memset(ghcb, 0, sizeof(*ghcb)); >> + } >> +} >> + > > set_memory_decrypted needs to check the return value. I see it > consistently return ENOMEM. I've traced that back to split_l...
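
For context, here is what the quoted allocation path looks like once the return value is handled. This is an illustrative sketch based on the fragment above, not the merged upstream code; the function name and error handling are assumptions:

static int __init sev_es_init_ghcbs(void)
{
	struct ghcb __percpu *ghcb_page;
	int cpu;

	/* Allocate one page-aligned GHCB per possible CPU */
	ghcb_page = __alloc_percpu(sizeof(struct ghcb), PAGE_SIZE);
	if (!ghcb_page)
		return -ENOMEM;

	/* Initialize per-cpu GHCB pages */
	for_each_possible_cpu(cpu) {
		struct ghcb *ghcb = (struct ghcb *)per_cpu_ptr(ghcb_page, cpu);
		int ret;

		/* Can fail (the thread reports -ENOMEM coming out of
		 * split_large_page()), so the result must be checked
		 * rather than ignored as in the original patch. */
		ret = set_memory_decrypted((unsigned long)ghcb,
					   sizeof(*ghcb) >> PAGE_SHIFT);
		if (ret)
			return ret;

		memset(ghcb, 0, sizeof(*ghcb));
	}

	return 0;
}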
2019 Apr 26
2
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
On Fri, Apr 26, 2019 at 08:32:39PM +0200, Halil Pasic wrote: > +EXPORT_SYMBOL_GPL(set_memory_encrypted); > +EXPORT_SYMBOL_GPL(set_memory_decrypted); > +EXPORT_SYMBOL_GPL(sev_active); Why do you export these? I know x86 exports those as well, but it shouldn't be needed there either.
2020 Apr 23
0
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
On Wed, Apr 22, 2020 at 06:33:13PM -0700, Bo Gan wrote: > On 4/15/20 8:53 AM, Joerg Roedel wrote: > > Hi Mike, > > > > On Tue, Apr 14, 2020 at 07:03:44PM +0000, Mike Stunes wrote: > > > set_memory_decrypted needs to check the return value. I see it > > > consistently return ENOMEM. I've traced that back to split_large_page > > > in arch/x86/mm/pat/set_memory.c. > > > > I agree that the return code needs to be checked. But I wonder why this > > happens. The spli...
2019 Apr 26
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...-License-Identifier: GPL-2.0 */ +#ifndef S390_MEM_ENCRYPT_H__ +#define S390_MEM_ENCRYPT_H__ + +#ifndef __ASSEMBLY__ + +#define sme_me_mask 0ULL + +static inline bool sme_active(void) { return false; } +extern bool sev_active(void); + +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + +#endif /* __ASSEMBLY__ */ + +#endif /* S390_MEM_ENCRYPT_H__ */ + diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 3e82f66d5c61..7e3cbd15dcfa 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -18,6 +18,7 @@ #include <linux/mman.h>...
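
Pieced together, the fragments above correspond to the following proposed arch/s390/include/asm/mem_encrypt.h. This is a reconstruction from the quoted diff, shown here only for readability:

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef S390_MEM_ENCRYPT_H__
#define S390_MEM_ENCRYPT_H__

#ifndef __ASSEMBLY__

/* s390 has no memory-encryption mask; 0ULL keeps common code compiling */
#define sme_me_mask	0ULL

static inline bool sme_active(void) { return false; }
extern bool sev_active(void);

int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);

#endif /* __ASSEMBLY__ */

#endif /* S390_MEM_ENCRYPT_H__ */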
2019 May 09
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...LL > > This is rather ugly, but I understand why it's there > Nod. > > + > > +static inline bool sme_active(void) { return false; } > > +extern bool sev_active(void); > > + > > +int set_memory_encrypted(unsigned long addr, int numpages); > > +int set_memory_decrypted(unsigned long addr, int numpages); > > + > > +#endif /* __ASSEMBLY__ */ > > + > > +#endif /* S390_MEM_ENCRYPT_H__ */ > > + > > diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c > > index 3e82f66d5c61..7e3cbd15dcfa 100644 > > --- a/arch/s390/mm/in...
2019 May 09
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...{ return false; } > +extern bool sev_active(void); > + I noticed this patch always returns false for sme_active. Is it safe to assume that whatever fixups are required on x86 to deal with sme do not apply to s390? > +int set_memory_encrypted(unsigned long addr, int numpages); > +int set_memory_decrypted(unsigned long addr, int numpages); > + > +#endif /* __ASSEMBLY__ */ > + > +#endif /* S390_MEM_ENCRYPT_H__ */ > + > diff --git a/arch/s390/mm/init.c > index 3e82f66d5c61..7e3cbd15dcfa 100644 > --- a/arch/s390/mm/init.c > +++ b/arch/s390/mm/init....
2019 Apr 09
0
[RFC PATCH 03/12] s390/mm: force swiotlb for protected virtualization
...10); > } > > +int set_memory_encrypted(unsigned long addr, int numpages) > +{ > + /* also called for the swiotlb bounce buffers, make all pages shared */ > + /* TODO: do ultravisor calls */ > + return 0; > +} > +EXPORT_SYMBOL_GPL(set_memory_encrypted); > + > +int set_memory_decrypted(unsigned long addr, int numpages) > +{ > + /* also called for the swiotlb bounce buffers, make all pages shared */ > + /* TODO: do ultravisor calls */ > + return 0; > +} > +EXPORT_SYMBOL_GPL(set_memory_decrypted); > + > +/* are we a protected virtualization guest? */ > +b...
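
For readability, here are the RFC's stub implementations reconstructed from the fragment above (arch/s390/mm/init.c in the patch); both succeed unconditionally until the ultravisor calls are wired up:

int set_memory_encrypted(unsigned long addr, int numpages)
{
	/* also called for the swiotlb bounce buffers, make all pages shared */
	/* TODO: do ultravisor calls */
	return 0;
}
EXPORT_SYMBOL_GPL(set_memory_encrypted);

int set_memory_decrypted(unsigned long addr, int numpages)
{
	/* also called for the swiotlb bounce buffers, make all pages shared */
	/* TODO: do ultravisor calls */
	return 0;
}
EXPORT_SYMBOL_GPL(set_memory_decrypted);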
2019 Apr 29
1
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...19 15:59, Halil Pasic wrote: > On Fri, 26 Apr 2019 12:27:11 -0700 > Christoph Hellwig <hch at infradead.org> wrote: > >> On Fri, Apr 26, 2019 at 08:32:39PM +0200, Halil Pasic wrote: >>> +EXPORT_SYMBOL_GPL(set_memory_encrypted); >> >>> +EXPORT_SYMBOL_GPL(set_memory_decrypted); >> >>> +EXPORT_SYMBOL_GPL(sev_active); >> >> Why do you export these? I know x86 exports those as well, but >> it shouldn't be needed there either. >> > > I export these to be in line with the x86 implementation (which > is the original and see...
2020 Apr 14
1
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
On 4/14/20 3:12 PM, Dave Hansen wrote: > On 4/14/20 1:04 PM, Tom Lendacky wrote: >>> set_memory_decrypted needs to check the return value. I see it >>> consistently return ENOMEM. I've traced that back to split_large_page >>> in arch/x86/mm/pat/set_memory.c. >> >> At that point the guest won't be able to communicate with the >> hypervisor, too. Maybe we shoul...
2019 Jun 06
0
[PATCH v4 1/8] s390/mm: force swiotlb for protected virtualization
...-License-Identifier: GPL-2.0 */ +#ifndef S390_MEM_ENCRYPT_H__ +#define S390_MEM_ENCRYPT_H__ + +#ifndef __ASSEMBLY__ + +#define sme_me_mask 0ULL + +static inline bool sme_active(void) { return false; } +extern bool sev_active(void); + +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + +#endif /* __ASSEMBLY__ */ + +#endif /* S390_MEM_ENCRYPT_H__ */ + diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 14d1eae9fe43..f0bee6af3960 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -18,6 +18,7 @@ #include <linux/mman.h>...
2019 Jun 12
0
[PATCH v5 1/8] s390/mm: force swiotlb for protected virtualization
...-License-Identifier: GPL-2.0 */ +#ifndef S390_MEM_ENCRYPT_H__ +#define S390_MEM_ENCRYPT_H__ + +#ifndef __ASSEMBLY__ + +#define sme_me_mask 0ULL + +static inline bool sme_active(void) { return false; } +extern bool sev_active(void); + +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + +#endif /* __ASSEMBLY__ */ + +#endif /* S390_MEM_ENCRYPT_H__ */ + diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 14d1eae9fe43..f0bee6af3960 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -18,6 +18,7 @@ #include <linux/mman.h>...
2019 May 23
0
[PATCH v2 1/8] s390/mm: force swiotlb for protected virtualization
...-License-Identifier: GPL-2.0 */ +#ifndef S390_MEM_ENCRYPT_H__ +#define S390_MEM_ENCRYPT_H__ + +#ifndef __ASSEMBLY__ + +#define sme_me_mask 0ULL + +static inline bool sme_active(void) { return false; } +extern bool sev_active(void); + +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + +#endif /* __ASSEMBLY__ */ + +#endif /* S390_MEM_ENCRYPT_H__ */ + diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 14d1eae9fe43..f0bee6af3960 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -18,6 +18,7 @@ #include <linux/mman.h>...
2019 May 29
0
[PATCH v3 1/8] s390/mm: force swiotlb for protected virtualization
...-License-Identifier: GPL-2.0 */ +#ifndef S390_MEM_ENCRYPT_H__ +#define S390_MEM_ENCRYPT_H__ + +#ifndef __ASSEMBLY__ + +#define sme_me_mask 0ULL + +static inline bool sme_active(void) { return false; } +extern bool sev_active(void); + +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + +#endif /* __ASSEMBLY__ */ + +#endif /* S390_MEM_ENCRYPT_H__ */ + diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 14d1eae9fe43..f0bee6af3960 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -18,6 +18,7 @@ #include <linux/mman.h>...
2019 Apr 09
0
[RFC PATCH 03/12] s390/mm: force swiotlb for protected virtualization
...ges) > > > +{ > > > + /* also called for the swiotlb bounce buffers, make all pages shared */ > > > + /* TODO: do ultravisor calls */ > > > + return 0; > > > +} > > > +EXPORT_SYMBOL_GPL(set_memory_encrypted); > > > + > > > +int set_memory_decrypted(unsigned long addr, int numpages) > > > +{ > > > + /* also called for the swiotlb bounce buffers, make all pages shared */ > > > + /* TODO: do ultravisor calls */ > > > + return 0; > > > +} > > > +EXPORT_SYMBOL_GPL(set_memory_decrypted); > ...
2019 Apr 29
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
On Fri, 26 Apr 2019 12:27:11 -0700 Christoph Hellwig <hch at infradead.org> wrote: > On Fri, Apr 26, 2019 at 08:32:39PM +0200, Halil Pasic wrote: > > +EXPORT_SYMBOL_GPL(set_memory_encrypted); > > > +EXPORT_SYMBOL_GPL(set_memory_decrypted); > > > +EXPORT_SYMBOL_GPL(sev_active); > > Why do you export these? I know x86 exports those as well, but > it shouldn't be needed there either. > I export these to be in line with the x86 implementation (which is the original and seems to be the only one at the moment...
2020 Apr 14
0
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
On 4/14/20 1:04 PM, Tom Lendacky wrote: >> set_memory_decrypted needs to check the return value. I see it >> consistently return ENOMEM. I've traced that back to split_large_page >> in arch/x86/mm/pat/set_memory.c. > > At that point the guest won't be able to communicate with the > hypervisor, too. Maybe we should BUG() here to ter...
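
Sketched for illustration, the BUG() option floated in this subthread. This was still under discussion at the time, so it is an assumption rather than merged behavior, and the helper name is hypothetical:

static void sev_es_set_ghcb_decrypted(struct ghcb *ghcb)
{
	/* A guest that cannot share its GHCB has no way to talk to the
	 * hypervisor, so one option is to terminate outright. */
	if (set_memory_decrypted((unsigned long)ghcb,
				 sizeof(*ghcb) >> PAGE_SHIFT))
		BUG();
}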