Displaying 20 results from an estimated 66 matches for "set_memory".
2020 Apr 14
3
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
...>> + /* Allocate GHCB pages */
>> + ghcb_page = __alloc_percpu(sizeof(struct ghcb), PAGE_SIZE);
>> +
>> + /* Initialize per-cpu GHCB pages */
>> + for_each_possible_cpu(cpu) {
>> + struct ghcb *ghcb = (struct ghcb *)per_cpu_ptr(ghcb_page, cpu);
>> +
>> + set_memory_decrypted((unsigned long)ghcb,
>> + sizeof(*ghcb) >> PAGE_SHIFT);
>> + memset(ghcb, 0, sizeof(*ghcb));
>> + }
>> +}
>> +
>
> set_memory_decrypted needs to check the return value. I see it
> consistently return ENOMEM. I've traced that back...
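A minimal sketch of the version the review asks for — the same per-cpu loop, but with both the allocation and the set_memory_decrypted() failures propagated instead of ignored (the function name and the error handling shown are assumptions, not the actual patch):

static int __init setup_ghcb_pages(void)
{
        struct ghcb __percpu *ghcb_page;
        int cpu, ret;

        /* Allocate one page-aligned GHCB per possible CPU */
        ghcb_page = __alloc_percpu(sizeof(struct ghcb), PAGE_SIZE);
        if (!ghcb_page)
                return -ENOMEM;

        for_each_possible_cpu(cpu) {
                struct ghcb *ghcb = per_cpu_ptr(ghcb_page, cpu);

                /* Map the GHCB shared with the hypervisor; check the result */
                ret = set_memory_decrypted((unsigned long)ghcb,
                                           sizeof(*ghcb) >> PAGE_SHIFT);
                if (ret)
                        return ret;

                memset(ghcb, 0, sizeof(*ghcb));
        }
        return 0;
}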
2010 May 13
0
[PATCH matahari] Moving QMF functionality into a transport layer.
...emory = sysinf.totalram / 1024L;
- }
- else
- {
- throw runtime_error("Unable to retrieve system memory details.");
- }
+string
+Host::get_architecture() const
+{
+ return _architecture;
+}
- cout << "memory: " << memory << endl;
+void
+Host::set_memory(const unsigned int memory)
+{
+ _memory = memory;
+}
- management_object->set_uuid(uuid);
- management_object->set_hostname(hostname);
- management_object->set_hypervisor(hypervisor);
- management_object->set_arch(architecture);
- management_object->set_memory(memory);
- man...
2019 Apr 26
2
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
On Fri, Apr 26, 2019 at 08:32:39PM +0200, Halil Pasic wrote:
> +EXPORT_SYMBOL_GPL(set_memory_encrypted);
> +EXPORT_SYMBOL_GPL(set_memory_decrypted);
> +EXPORT_SYMBOL_GPL(sev_active);
Why do you export these? I know x86 exports those as well, but
it shouldn't be needed there either.
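For context, the export only matters for loadable modules — built-in code can call the functions regardless; a generic illustration of what the EXPORT_SYMBOL_GPL() lines add (not taken from the patch):

int set_memory_decrypted(unsigned long addr, int numpages)
{
        /* architecture-specific conversion of the range to shared pages */
        return 0;
}
/* Without the export, only built-in code can link against the symbol;
 * with it, GPL-licensed modules can call set_memory_decrypted() too. */
EXPORT_SYMBOL_GPL(set_memory_decrypted);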
2019 Apr 29
1
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
On 29.04.19 15:59, Halil Pasic wrote:
> On Fri, 26 Apr 2019 12:27:11 -0700
> Christoph Hellwig <hch at infradead.org> wrote:
>
>> On Fri, Apr 26, 2019 at 08:32:39PM +0200, Halil Pasic wrote:
>>> +EXPORT_SYMBOL_GPL(set_memory_encrypted);
>>
>>> +EXPORT_SYMBOL_GPL(set_memory_decrypted);
>>
>>> +EXPORT_SYMBOL_GPL(sev_active);
>>
>> Why do you export these? I know x86 exports those as well, but
>> it shouldn't be needed there either.
>>
>
> I export these to...
2020 Apr 14
1
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
On 4/14/20 3:12 PM, Dave Hansen wrote:
> On 4/14/20 1:04 PM, Tom Lendacky wrote:
>>> set_memory_decrypted needs to check the return value. I see it
>>> consistently return ENOMEM. I've traced that back to split_large_page
>>> in arch/x86/mm/pat/set_memory.c.
>>
>> At that point the guest won't be able to communicate with the
>> hypervisor, too. Mayb...
2010 Apr 15
1
[PATCH matahari] Refactored the Host agent.
...ST_NAME_MAX];
+ ret = gethostname(hostname_c, sizeof(hostname_c));
+ if (ret != 0)
+ throw runtime_error("Unable to get hostname");
+ string hostname(hostname_c);
+ management_object->set_hostname(hostname);
+
+ // Hypervisor, arch, memory
+ management_object->set_memory(0);
+ management_object->set_hypervisor("unknown");
+ management_object->set_arch("unknown");
+
+ virConnectPtr connection;
+ virNodeInfo info;
+ connection = virConnectOpenReadOnly(NULL);
+ if (connection) {
+ const char *hv = virConnectGetType(conn...
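The libvirt API for querying the node's memory size is virNodeGetInfo(); a standalone sketch of that call, unrelated to how this particular patch continues:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
        virConnectPtr conn = virConnectOpenReadOnly(NULL);
        virNodeInfo info;

        if (conn && virNodeGetInfo(conn, &info) == 0)
                printf("memory: %lu KiB\n", info.memory);   /* node RAM in KiB */
        if (conn)
                virConnectClose(conn);
        return 0;
}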
2019 May 08
2
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...M_ENCRYPT_H__
> +#define S390_MEM_ENCRYPT_H__
> +
> +#ifndef __ASSEMBLY__
> +
> +#define sme_me_mask 0ULL
This is rather ugly, but I understand why it's there
> +
> +static inline bool sme_active(void) { return false; }
> +extern bool sev_active(void);
> +
> +int set_memory_encrypted(unsigned long addr, int numpages);
> +int set_memory_decrypted(unsigned long addr, int numpages);
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* S390_MEM_ENCRYPT_H__ */
> +
> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
> index 3e82f66d5c61..7e3cbd15dc...
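To make the prototypes above concrete, a sketch of the shape such an implementation takes for protected virtualization — converting the range page by page between guest-private and shared-with-hypervisor (the uv_set_shared()/uv_remove_shared() helper names are assumptions here, not quoted from the patch):

int set_memory_decrypted(unsigned long addr, int numpages)
{
        int i;

        /* share each page of the range with the hypervisor */
        for (i = 0; i < numpages; i++) {
                uv_set_shared(addr);
                addr += PAGE_SIZE;
        }
        return 0;
}

int set_memory_encrypted(unsigned long addr, int numpages)
{
        int i;

        /* make each page of the range guest-private again */
        for (i = 0; i < numpages; i++) {
                uv_remove_shared(addr);
                addr += PAGE_SIZE;
        }
        return 0;
}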
2019 Apr 29
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
On Fri, 26 Apr 2019 12:27:11 -0700
Christoph Hellwig <hch at infradead.org> wrote:
> On Fri, Apr 26, 2019 at 08:32:39PM +0200, Halil Pasic wrote:
> > +EXPORT_SYMBOL_GPL(set_memory_encrypted);
>
> > +EXPORT_SYMBOL_GPL(set_memory_decrypted);
>
> > +EXPORT_SYMBOL_GPL(sev_active);
>
> Why do you export these? I know x86 exports those as well, but
> it shouldn't be needed there either.
>
I export these to be in line with the x86 implementati...
2020 Apr 14
0
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
On 4/14/20 1:04 PM, Tom Lendacky wrote:
>> set_memory_decrypted needs to check the return value. I see it
>> consistently return ENOMEM. I've traced that back to split_large_page
>> in arch/x86/mm/pat/set_memory.c.
>
> At that point the guest won't be able to communicate with the
> hypervisor, too. Maybe we should BUG() h...
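The alternative raised here — treating a GHCB that cannot be mapped shared as fatal, since the guest cannot reach the hypervisor without it — would look roughly like this (a sketch, not the eventual patch):

        if (set_memory_decrypted((unsigned long)ghcb,
                                 sizeof(*ghcb) >> PAGE_SHIFT))
                panic("Cannot map GHCB unencrypted");   /* or BUG(), as suggested */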
2020 Apr 15
0
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
Hi Mike,
On Tue, Apr 14, 2020 at 07:03:44PM +0000, Mike Stunes wrote:
> set_memory_decrypted needs to check the return value. I see it
> consistently return ENOMEM. I've traced that back to split_large_page
> in arch/x86/mm/pat/set_memory.c.
I agree that the return code needs to be checked. But I wonder why this
happens. The split_large_page() function returns -ENOMEM...
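Schematically, the -ENOMEM referred to comes from the page allocation the split path needs for the new lower-level page table (paraphrased, not the verbatim arch/x86/mm/pat/set_memory.c code):

        /* inside the large-page split path */
        struct page *base = alloc_pages(GFP_KERNEL, 0);
        if (!base)
                return -ENOMEM;   /* propagated up through set_memory_decrypted() */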
2020 Apr 23
0
[PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
On Wed, Apr 22, 2020 at 06:33:13PM -0700, Bo Gan wrote:
> On 4/15/20 8:53 AM, Joerg Roedel wrote:
> > Hi Mike,
> >
> > On Tue, Apr 14, 2020 at 07:03:44PM +0000, Mike Stunes wrote:
> > > set_memory_decrypted needs to check the return value. I see it
> > > consistently return ENOMEM. I've traced that back to split_large_page
> > > in arch/x86/mm/pat/set_memory.c.
> >
> > I agree that the return code needs to be checked. But I wonder why this
> > happens...
2020 Feb 11
0
[PATCH 35/62] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
...3 +++
3 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 6f61bb93366a..d48e7be9bb49 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -48,6 +48,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
void __init mem_encrypt_init(void);
void __init mem_encrypt_free_decrypted_mem(void);
+void __init encrypted_state_init_ghcbs(void);
bool sme_active(void);
bool sev_active(void);
bool sev_es_active(void);
@@ -71,6 +72,7 @@ static inline voi...
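The header pattern being extended here — a real declaration when CONFIG_AMD_MEM_ENCRYPT is enabled and an empty inline stub otherwise, so callers need no #ifdefs — is roughly (a sketch of the pattern, not the exact hunk):

#ifdef CONFIG_AMD_MEM_ENCRYPT
void __init encrypted_state_init_ghcbs(void);
#else   /* !CONFIG_AMD_MEM_ENCRYPT */
static inline void encrypted_state_init_ghcbs(void) { }
#endif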
2020 Feb 11
1
[PATCH 35/62] x86/sev-es: Setup per-cpu GHCBs for the runtime handler
...nsertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index 6f61bb93366a..d48e7be9bb49 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -48,6 +48,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
> void __init mem_encrypt_init(void);
> void __init mem_encrypt_free_decrypted_mem(void);
>
> +void __init encrypted_state_init_ghcbs(void);
> bool sme_active(void);
> bool sev_active(void);
> bool sev_es_active(void);
&g...
2019 Apr 26
0
[PATCH 04/10] s390/mm: force swiotlb for protected virtualization
...arch/s390/include/asm/mem_encrypt.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef S390_MEM_ENCRYPT_H__
+#define S390_MEM_ENCRYPT_H__
+
+#ifndef __ASSEMBLY__
+
+#define sme_me_mask 0ULL
+
+static inline bool sme_active(void) { return false; }
+extern bool sev_active(void);
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* S390_MEM_ENCRYPT_H__ */
+
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 3e82f66d5c61..7e3cbd15dcfa 100644
--- a/arch/s390/mm/init.c
+++...
2019 Jun 06
0
[PATCH v4 1/8] s390/mm: force swiotlb for protected virtualization
...arch/s390/include/asm/mem_encrypt.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef S390_MEM_ENCRYPT_H__
+#define S390_MEM_ENCRYPT_H__
+
+#ifndef __ASSEMBLY__
+
+#define sme_me_mask 0ULL
+
+static inline bool sme_active(void) { return false; }
+extern bool sev_active(void);
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* S390_MEM_ENCRYPT_H__ */
+
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 14d1eae9fe43..f0bee6af3960 100644
--- a/arch/s390/mm/init.c
+++...
2019 Jun 12
0
[PATCH v5 1/8] s390/mm: force swiotlb for protected virtualization
...arch/s390/include/asm/mem_encrypt.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef S390_MEM_ENCRYPT_H__
+#define S390_MEM_ENCRYPT_H__
+
+#ifndef __ASSEMBLY__
+
+#define sme_me_mask 0ULL
+
+static inline bool sme_active(void) { return false; }
+extern bool sev_active(void);
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* S390_MEM_ENCRYPT_H__ */
+
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 14d1eae9fe43..f0bee6af3960 100644
--- a/arch/s390/mm/init.c
+++...
2019 May 23
0
[PATCH v2 1/8] s390/mm: force swiotlb for protected virtualization
...arch/s390/include/asm/mem_encrypt.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef S390_MEM_ENCRYPT_H__
+#define S390_MEM_ENCRYPT_H__
+
+#ifndef __ASSEMBLY__
+
+#define sme_me_mask 0ULL
+
+static inline bool sme_active(void) { return false; }
+extern bool sev_active(void);
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* S390_MEM_ENCRYPT_H__ */
+
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 14d1eae9fe43..f0bee6af3960 100644
--- a/arch/s390/mm/init.c
+++...