search for: sev_act

Displaying 20 results from an estimated 69 matches for "sev_act".

2019 May 08
2
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...r: GPL-2.0 */ > +#ifndef S390_MEM_ENCRYPT_H__ > +#define S390_MEM_ENCRYPT_H__ > + > +#ifndef __ASSEMBLY__ > + > +#define sme_me_mask 0ULL This is rather ugly, but I understand why it's there > + > +static inline bool sme_active(void) { return false; } > +extern bool sev_active(void); > + > +int set_memory_encrypted(unsigned long addr, int numpages); > +int set_memory_decrypted(unsigned long addr, int numpages); > + > +#endif /* __ASSEMBLY__ */ > + > +#endif /* S390_MEM_ENCRYPT_H__ */ > + > diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init...
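For reference, the header under discussion, reassembled from the quoted hunk of arch/s390/include/asm/mem_encrypt.h as posted in the patch (a sketch of the posted version, not necessarily the merged file):

    /* SPDX-License-Identifier: GPL-2.0 */
    #ifndef S390_MEM_ENCRYPT_H__
    #define S390_MEM_ENCRYPT_H__

    #ifndef __ASSEMBLY__

    /* s390 has no SME; a zero mask keeps generic code that tests it happy */
    #define sme_me_mask	0ULL

    static inline bool sme_active(void) { return false; }
    extern bool sev_active(void);

    int set_memory_encrypted(unsigned long addr, int numpages);
    int set_memory_decrypted(unsigned long addr, int numpages);

    #endif /* __ASSEMBLY__ */

    #endif /* S390_MEM_ENCRYPT_H__ */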
2019 Aug 10
3
[RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted
...at least allows > * all of the sensible Xen configurations to work correctly. > + * > + * Also, if guest memory is encrypted the host can't access > + * it directly. In this case, we'll need to use the DMA API. > */ > - if (xen_domain()) > + if (xen_domain() || sev_active()) > return true; > > return false; So I gave this lots of thought, and I'm coming round to basically accepting something very similar to this patch. But not exactly like this :). Let's see what are the requirements. If 1. We do not trust the device (so we want to use...
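A sketch of the change being debated, with the quoted lines put back into context. The enclosing function is assumed to be vring_use_dma_api() in drivers/virtio/virtio_ring.c; its name is not visible in the truncated snippet, and the earlier checks are elided:

    static bool vring_use_dma_api(struct virtio_device *vdev)
    {
    	/* ... earlier checks elided ... */

    	/*
    	 * ... [earlier comment text elided] ... this at least allows
    	 * all of the sensible Xen configurations to work correctly.
    	 *
    	 * Also, if guest memory is encrypted the host can't access
    	 * it directly. In this case, we'll need to use the DMA API.
    	 */
    	if (xen_domain() || sev_active())
    		return true;

    	return false;
    }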
2019 Apr 26
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...888c --- /dev/null +++ b/arch/s390/include/asm/mem_encrypt.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef S390_MEM_ENCRYPT_H__ +#define S390_MEM_ENCRYPT_H__ + +#ifndef __ASSEMBLY__ + +#define sme_me_mask 0ULL + +static inline bool sme_active(void) { return false; } +extern bool sev_active(void); + +int set_memory_encrypted(unsigned long addr, int numpages); +int set_memory_decrypted(unsigned long addr, int numpages); + +#endif /* __ASSEMBLY__ */ + +#endif /* S390_MEM_ENCRYPT_H__ */ + diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 3e82f66d5c61..7e3cbd15dcfa 100644 --...
2019 May 09
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...efine S390_MEM_ENCRYPT_H__ > > + > > +#ifndef __ASSEMBLY__ > > + > > +#define sme_me_mask 0ULL > > This is rather ugly, but I understand why it's there > Nod. > > + > > +static inline bool sme_active(void) { return false; } > > +extern bool sev_active(void); > > + > > +int set_memory_encrypted(unsigned long addr, int numpages); > > +int set_memory_decrypted(unsigned long addr, int numpages); > > + > > +#endif /* __ASSEMBLY__ */ > > + > > +#endif /* S390_MEM_ENCRYPT_H__ */ > > + > > diff --...
2019 May 09
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
....h > @@ -0,0 +1,18 @@ > +/* SPDX-License-Identifier: GPL-2.0 */ > +#ifndef S390_MEM_ENCRYPT_H__ > +#define S390_MEM_ENCRYPT_H__ > + > +#ifndef __ASSEMBLY__ > + > +#define sme_me_mask 0ULL > + > +static inline bool sme_active(void) { return false; } > +extern bool sev_active(void); > + I noticed this patch always returns false for sme_active. Is it safe to assume that whatever fixups are required on x86 to deal with sme do not apply to s390? > +int set_memory_encrypted(unsigned long addr, int numpages); > +int set_memory_decrypted(unsigned long addr, int...
2019 Apr 09
0
[RFC PATCH 03/12] s390/mm: force swiotlb for protected virtualization
...igned long addr, int numpages) > +{ > + /* also called for the swiotlb bounce buffers, make all pages shared */ > + /* TODO: do ultravisor calls */ > + return 0; > +} > +EXPORT_SYMBOL_GPL(set_memory_decrypted); > + > +/* are we a protected virtualization guest? */ > +bool sev_active(void) > +{ > + /* > + * TODO: Do proper detection using ultravisor, for now let us fake we > + * have it so the code gets exercised. That's the swiotlb stuff, right? (The patches will obviously need some reordering before it is actually getting merged.) > + */ > + re...
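Put back together, the stub implementation being reviewed here looks roughly like this (the exact file under arch/s390/mm is not visible in the snippet, and the return statement of sev_active() is cut off, so the value below is an assumption based on the TODO comment):

    int set_memory_decrypted(unsigned long addr, int numpages)
    {
    	/* also called for the swiotlb bounce buffers, make all pages shared */
    	/* TODO: do ultravisor calls */
    	return 0;
    }
    EXPORT_SYMBOL_GPL(set_memory_decrypted);

    /* are we a protected virtualization guest? */
    bool sev_active(void)
    {
    	/*
    	 * TODO: Do proper detection using ultravisor, for now let us fake we
    	 * have it so the code gets exercised.
    	 */
    	return true;	/* assumed; the actual return statement is truncated in the snippet */
    }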
2019 Apr 26
2
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
On Fri, Apr 26, 2019 at 08:32:39PM +0200, Halil Pasic wrote: > +EXPORT_SYMBOL_GPL(set_memory_encrypted); > +EXPORT_SYMBOL_GPL(set_memory_decrypted); > +EXPORT_SYMBOL_GPL(sev_active); Why do you export these? I know x86 exports those as well, but it shouldn't be needed there either.
2020 Jul 24
0
[PATCH v5 39/75] x86/sev-es: Print SEV-ES info into kernel log
...pr_info("AMD Memory Encryption Features active:"); + + /* Secure Memory Encryption */ + if (sme_active()) { + /* + * SME is mutually exclusive with any of the SEV + * features below. + */ + pr_cont(" SME\n"); + return; + } + + /* Secure Encrypted Virtualization */ + if (sev_active()) + pr_cont(" SEV"); + + /* Encrypted Register State */ + if (sev_es_active()) + pr_cont(" SEV-ES"); + + pr_cont("\n"); +} + /* Architecture __weak replacement functions */ void __init mem_encrypt_init(void) { @@ -422,8 +447,6 @@ void __init mem_encrypt_init(v...
2020 Apr 28
0
[PATCH v3 38/75] x86/sev-es: Add SEV-ES Feature Detection
...crypt.h @@ -19,6 +19,7 @@ #ifdef CONFIG_AMD_MEM_ENCRYPT extern u64 sme_me_mask; +extern u64 sev_status; extern bool sev_enabled; void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr, @@ -49,6 +50,7 @@ void __init mem_encrypt_free_decrypted_mem(void); bool sme_active(void); bool sev_active(void); +bool sev_es_active(void); #define __bss_decrypted __attribute__((__section__(".bss..decrypted"))) @@ -71,6 +73,7 @@ static inline void __init sme_enable(struct boot_params *bp) { } static inline bool sme_active(void) { return false; } static inline bool sev_active(void...
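The feature-detection hunks quoted in this and the following postings boil down to new declarations in arch/x86/include/asm/mem_encrypt.h; a sketch of the result is below (the !CONFIG_AMD_MEM_ENCRYPT stub for sev_es_active() is cut off in the snippet and assumed here):

    #ifdef CONFIG_AMD_MEM_ENCRYPT
    extern u64 sme_me_mask;
    extern u64 sev_status;		/* added by the patch */
    extern bool sev_enabled;

    bool sme_active(void);
    bool sev_active(void);
    bool sev_es_active(void);		/* added by the patch */
    #else	/* !CONFIG_AMD_MEM_ENCRYPT */
    static inline bool sme_active(void) { return false; }
    static inline bool sev_active(void) { return false; }
    static inline bool sev_es_active(void) { return false; }	/* assumed stub */
    #endif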
2020 Jul 24
0
[PATCH v5 38/75] x86/sev-es: Add SEV-ES Feature Detection
...ude/asm/mem_encrypt.h @@ -19,6 +19,7 @@ #ifdef CONFIG_AMD_MEM_ENCRYPT extern u64 sme_me_mask; +extern u64 sev_status; extern bool sev_enabled; void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr, @@ -50,6 +51,7 @@ void __init mem_encrypt_init(void); bool sme_active(void); bool sev_active(void); +bool sev_es_active(void); #define __bss_decrypted __attribute__((__section__(".bss..decrypted"))) @@ -72,6 +74,7 @@ static inline void __init sme_enable(struct boot_params *bp) { } static inline bool sme_active(void) { return false; } static inline bool sev_active(void...
2020 Sep 07
0
[PATCH v7 36/72] x86/sev-es: Add SEV-ES Feature Detection
...ude/asm/mem_encrypt.h @@ -19,6 +19,7 @@ #ifdef CONFIG_AMD_MEM_ENCRYPT extern u64 sme_me_mask; +extern u64 sev_status; extern bool sev_enabled; void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr, @@ -50,6 +51,7 @@ void __init mem_encrypt_init(void); bool sme_active(void); bool sev_active(void); +bool sev_es_active(void); #define __bss_decrypted __attribute__((__section__(".bss..decrypted"))) @@ -72,6 +74,7 @@ static inline void __init sme_enable(struct boot_params *bp) { } static inline bool sme_active(void) { return false; } static inline bool sev_active(void...
2020 Aug 24
0
[PATCH v6 39/76] x86/sev-es: Add SEV-ES Feature Detection
...ude/asm/mem_encrypt.h @@ -19,6 +19,7 @@ #ifdef CONFIG_AMD_MEM_ENCRYPT extern u64 sme_me_mask; +extern u64 sev_status; extern bool sev_enabled; void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr, @@ -50,6 +51,7 @@ void __init mem_encrypt_init(void); bool sme_active(void); bool sev_active(void); +bool sev_es_active(void); #define __bss_decrypted __attribute__((__section__(".bss..decrypted"))) @@ -72,6 +74,7 @@ static inline void __init sme_enable(struct boot_params *bp) { } static inline bool sme_active(void) { return false; } static inline bool sev_active(void...
2019 Apr 09
0
[RFC PATCH 03/12] s390/mm: force swiotlb for protected virtualization
...wiotlb bounce buffers, make all pages shared */ > > > + /* TODO: do ultravisor calls */ > > > + return 0; > > > +} > > > +EXPORT_SYMBOL_GPL(set_memory_decrypted); > > > + > > > +/* are we a protected virtualization guest? */ > > > +bool sev_active(void) > > > +{ > > > + /* > > > + * TODO: Do proper detection using ultravisor, for now let us fake we > > > + * have it so the code gets exercised. > > > > That's the swiotlb stuff, right? > > > > You mean 'That'...
2019 Aug 11
8
[RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted
sev_active() is gone now in linux-next, at least as a global API. And once again this is entirely going in the wrong direction. The only way using the DMA API is going to work at all is if the device is ready for it. So we need a flag on the virtio device, exposed by the hypervisor (or hardware for hw v...
2019 Apr 29
1
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...0700 > Christoph Hellwig <hch at infradead.org> wrote: > >> On Fri, Apr 26, 2019 at 08:32:39PM +0200, Halil Pasic wrote: >>> +EXPORT_SYMBOL_GPL(set_memory_encrypted); >> >>> +EXPORT_SYMBOL_GPL(set_memory_decrypted); >> >>> +EXPORT_SYMBOL_GPL(sev_active); >> >> Why do you export these? I know x86 exports those as well, but >> it shouldn't be needed there either. >> > > I export these to be in line with the x86 implementation (which > is the original and seems to be the only one at the moment). I assumed >...
2020 Feb 11
0
[PATCH 35/62] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
...arch/x86/include/asm/mem_encrypt.h @@ -48,6 +48,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size); void __init mem_encrypt_init(void); void __init mem_encrypt_free_decrypted_mem(void); +void __init encrypted_state_init_ghcbs(void); bool sme_active(void); bool sev_active(void); bool sev_es_active(void); @@ -71,6 +72,7 @@ static inline void __init sme_early_init(void) { } static inline void __init sme_encrypt_kernel(struct boot_params *bp) { } static inline void __init sme_enable(struct boot_params *bp) { } +static inline void encrypted_state_init_ghcbs(void...