Displaying 13 results from an estimated 13 matches for "memflags".
2012 Dec 06
1
[PATCH] memop: adjust error checking in populate_physmap()
....com>
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -99,7 +99,8 @@ static void populate_physmap(struct memo
                                          a->nr_extents-1) )
         return;

-    if ( !multipage_allocation_permitted(current->domain, a->extent_order) )
+    if ( a->memflags & MEMF_populate_on_demand ? a->extent_order > MAX_ORDER :
+         !multipage_allocation_permitted(current->domain, a->extent_order) )
         return;

     for ( i = a->nr_done; i < a->nr_extents; i++ )
@@ -115,8 +116,7 @@ static void populate_physmap(struct memo...
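An aside on what the reworked check does: a populate-on-demand request allocates nothing up front, so only the raw extent order needs bounding against MAX_ORDER, while a normal populate still goes through the per-domain multipage policy. A compilable sketch of that logic, with stand-ins for the Xen names in the diff (the values below are illustrative, not Xen's):

    #include <stdbool.h>

    /* Stand-ins for the Xen names used in the diff; values illustrative. */
    #define MAX_ORDER                20
    #define MEMF_populate_on_demand  (1U << 8)

    struct memop_args { unsigned int memflags, extent_order; };

    /* Placeholder for Xen's per-domain policy check. */
    static bool multipage_allocation_permitted(unsigned int order)
    {
        return order <= 9;
    }

    /* The reworked condition: PoD requests only bound the order itself;
     * real allocations defer to the multipage policy. */
    static bool extent_order_ok(const struct memop_args *a)
    {
        return (a->memflags & MEMF_populate_on_demand)
               ? a->extent_order <= MAX_ORDER
               : multipage_allocation_permitted(a->extent_order);
    }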
2013 Oct 30
4
Re: Issue with ARM: Network doesn't work in the guest
2013/10/29, mail fetch <fetchmail.0104@gmail.com>:
> Hi all,
>
> I just saw a known bug on the wiki that the network doesn't work in the
> guest on the Arndale board:
>
> Network doesn't work in the guest
>
> Contact: julien.grall@citrix.com
> Status: In progress
> Description: Network doesn't work in the guest when an ethernet cable is
> plugged in
2013 Nov 14
4
[PATCH] xen/arm: Allow ballooning to work with 1:1 memory mapping
...struct page_info *page = NULL;
     unsigned long i, j;
     xen_pfn_t gpfn, mfn;
     struct domain *d = a->domain;
@@ -122,7 +125,33 @@ static void populate_physmap(struct memop_args *a)
         }
         else
         {
-            page = alloc_domheap_pages(d, a->extent_order, a->memflags);
+#ifdef CONFIG_ARM
+            if ( d == dom0 && platform_has_quirk(PLATFORM_QUIRK_DOM0_MAPPING_11) )
+            {
+                mfn = gpfn;
+                if ( !mfn_valid(mfn) )
+                {
+                    gdprintk(XENLOG_INFO, "Invalid mfn 0x%"PRI_xen_pfn"...
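The point of the quoted hunk: with the dom0 1:1 mapping quirk, guest frame numbers equal machine frame numbers, so ballooning a page back in must recover exactly the frame at gpfn rather than any free domheap page. A toy sketch of that decision (the stand-ins below are illustrative, not Xen's real helpers):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef uint64_t xen_pfn_t;
    struct page_info { xen_pfn_t mfn; };

    /* Toy stand-ins for the helpers named in the diff. */
    static struct page_info frames[16];          /* pretend machine memory */
    static bool mfn_valid(xen_pfn_t m) { return m < 16; }
    static struct page_info *mfn_to_page(xen_pfn_t m) { return &frames[m]; }
    static struct page_info *alloc_any_page(void) { return &frames[0]; }

    /* With the 1:1 quirk, the only acceptable page is the one at the
     * exact machine frame gpfn; without it, any free page will do. */
    static struct page_info *page_for_gpfn(xen_pfn_t gpfn, bool dom0_11)
    {
        if ( !dom0_11 )
            return alloc_any_page();
        return mfn_valid(gpfn) ? mfn_to_page(gpfn) : NULL;
    }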
2006 Sep 29
0
[PATCH 2/6] xen: add per-node bucks to page allocator
...ges(MEMZONE_XEN, order);
+    pg = alloc_heap_pages(MEMZONE_XEN, smp_processor_id(), order);
     local_irq_restore(flags);

     if ( unlikely(pg == NULL) )
@@ -580,8 +637,9 @@ int assign_pages(
 }

-struct page_info *alloc_domheap_pages(
-    struct domain *d, unsigned int order, unsigned int memflags)
+struct page_info *__alloc_domheap_pages(
+    struct domain *d, unsigned int cpu, unsigned int order,
+    unsigned int memflags)
 {
     struct page_info *pg = NULL;
     cpumask_t mask;
@@ -591,17 +649,17 @@ struct page_info *alloc_domheap_pages(

     if ( !(memflags & MEMF_dma) )
     {...
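The shape of this change is a straightforward wrapper split: the old three-argument entry point survives for existing callers and forwards to the new variant, which takes the allocating CPU explicitly so a per-node heap can be targeted. The excerpt cuts the wrapper off; presumably it looks roughly like:

    /* Presumed compatibility wrapper (not shown in the excerpt):
     * existing callers implicitly allocate on the node of the CPU
     * they are currently running on. */
    struct page_info *alloc_domheap_pages(
        struct domain *d, unsigned int order, unsigned int memflags)
    {
        return __alloc_domheap_pages(d, smp_processor_id(), order, memflags);
    }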
2007 Apr 10
7
PV domain save/restore break
I encountered a PV domain restore failure on r14770. Are you guys aware of this?
========================================================================
[2007-04-10 09:57:24 4664] DEBUG (balloon:113) Balloon: 754076 KiB free; need
65536; done.
[2007-04-10 09:57:24 4664] DEBUG (XendCheckpoint:220) [xc_restore]: /usr/lib/xen/bin/xc_restore 24 4 1 2 0 0 0
[2007-04-10 09:57:24 4664] INFO
2012 Oct 11
14
alloc_heap_pages is inefficient with many CPUs
...n the hypervisor, which cost too much time,
taking up 98% of the whole start-up time.
xen/common/page_alloc.c:

/* Allocate 2^@order contiguous pages. */
static struct page_info *alloc_heap_pages(
    unsigned int zone_lo, unsigned int zone_hi,
    unsigned int node, unsigned int order, unsigned int memflags)
{
    if ( pg[i].u.free.need_tlbflush )
    {
        /* Add in extra CPUs that need flushing because of this page. */
        cpus_andnot(extra_cpus_mask, cpu_online_map, mask);
        tlbflush_filter(extra_cpus_mask, pg[i].tlbflush_timestamp);
        cpus_or(mask, mask,...
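Why this loop hurts with many CPUs: the flush-filtering work runs once per stale page and touches a mask proportional to the number of online CPUs, all under the heap lock, so a single 2^order allocation can cost on the order of 2^order * nr_cpus mask-bit operations. A toy model of that scaling (the shape is assumed from the code above):

    #include <stdio.h>

    /* Toy cost model: one mask-wide filter pass per page in the run. */
    static unsigned long flush_filter_cost(unsigned int order,
                                           unsigned int nr_cpus)
    {
        return (1UL << order) * nr_cpus;
    }

    int main(void)
    {
        /* A 2MiB (order-9) allocation on 8 vs. 128 CPUs. */
        printf("8 cpus:   ~%lu mask-bit ops\n", flush_filter_cost(9, 8));
        printf("128 cpus: ~%lu mask-bit ops\n", flush_filter_cost(9, 128));
        return 0;
    }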
2013 Dec 04
5
qemu-xen-dir + PCI passthrough = BOOM
Hey,
I just started noticing it today - with qemu-xen (tip is
commit b97307ecaad98360f41ea36cd9674ef810c4f8cf
xen_disk: mark ioreq as mapped before unmapping in error case)
when I try to pass in a PCI device at bootup it blows up with:
char device redirected to /dev/pts/2 (label serial0)
qemu: hardware error: xen: failed to populate ram at 40050000
CPU #0:
EAX=00000000 EBX=00000000
2013 Oct 15
29
[PATCH 0/4] Reintroduce OVMF support
This small series reintroduces OVMF support in Xen.
You can fetch a working OVMF tree from:
git://xenbits.xen.org/people/liuw/ovmf.git master
A working changeset that can be put in Config.mk is:
8833370303d3bf3153760ee42760ef1b9b5c562
Note that VNC doesn't work properly when using OVMF, but that's not OVMF's
problem. This issue should be addressed in Xen and I'm
2013 Apr 24
15
Bare-metal Xen on ARM boot
Hi,
I was wondering if there is any documentation on how to write a bare metal
application for Xen. I don't need to parse the device tree and such yet; a
simple booting "Hello World" would be fine :-)
We wrote one, and when trying to boot we get (this was an uncompressed
binary, no image):
libxl: notice: libxl_numa.c:451:libxl__get_numa_candidate: NUMA placement
failed,
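For the "simple booting Hello World" being asked for, the usual minimal route on 32-bit ARM is to call Xen's console_io hypercall directly: hypercall number in r12, arguments in r0-r2, trapped with hvc and Xen's hypercall tag (0xEA1). A hedged sketch; the constants follow Xen's public headers, but startup code, stack setup and the linker script are left out:

    #define __HYPERVISOR_console_io  18
    #define CONSOLEIO_write          0
    /* XEN_HYPERCALL_TAG (0xEA1) is hard-coded in the hvc below. */

    static long xen_console_write(const char *buf, unsigned long len)
    {
        register unsigned long r0  asm("r0")  = CONSOLEIO_write;
        register unsigned long r1  asm("r1")  = len;
        register unsigned long r2  asm("r2")  = (unsigned long)buf;
        register unsigned long r12 asm("r12") = __HYPERVISOR_console_io;

        asm volatile("hvc #0xEA1"
                     : "+r" (r0)
                     : "r" (r1), "r" (r2), "r" (r12)
                     : "memory");
        return r0;                    /* hypercall return value */
    }

    void start(void)                  /* assumed entry point */
    {
        xen_console_write("Hello from a bare-metal Xen guest\n", 34);
        for ( ;; )
            ;                         /* nothing to return to */
    }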
2012 Aug 10
18
[PATCH v2 0/5] ARM hypercall ABI: 64 bit ready
Hi all,
this patch series makes the necessary changes to make sure that the
current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:
- it defines xen_ulong_t as uint64_t on ARM;
- it introduces a new macro to handle guest pointers, called
XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on aarch32 and is going to
have size 8 bytes on aarch64);
- it replaces all the occurrences of
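The reason two handle flavours exist, per the list above: structures shared through memory need one layout for 32- and 64-bit guests, while hypercall parameters live in registers and can stay register-sized. Illustrative stand-ins (not the series' actual definitions):

    #include <stdint.h>

    /* Fixed 64-bit on both arm32 and aarch64, so in-memory ABI structs
     * share a single layout. */
    typedef uint64_t xen_ulong_t;

    /* In-memory guest handle: always 8 bytes regardless of guest bitness. */
    typedef struct { uint64_t p; } xen_guest_handle_t;

    /* Hypercall-parameter handle: register-sized, i.e. 4 bytes on arm32
     * and 8 bytes on aarch64 -- the role XEN_GUEST_HANDLE_PARAM plays. */
    typedef struct { uintptr_t p; } xen_guest_handle_param_t;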
2012 Aug 16
27
[PATCH v3 0/6] ARM hypercall ABI: 64 bit ready
Hi all,
this patch series makes the necessary changes to make sure that the
current ARM hypercall ABI can be used as-is on 64 bit ARM platforms:
- it defines xen_ulong_t as uint64_t on ARM;
- it introduces a new macro to handle guest pointers, called
XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on aarch32 and is going to
have size 8 bytes on aarch64);
- it replaces all the occurrences of
2012 Jan 09
39
[PATCH v4 00/25] xen: ARMv7 with virtualization extensions
Hello everyone,
this is the fourth version of the patch series that introduces ARMv7
with virtualization extensions support in Xen.
The series allows Xen and Dom0 to boot on a Cortex-A15 based Versatile
Express simulator.
See the following announce email for more information about what we
are trying to achieve, as well as the original git history:
See
2011 Dec 06
57
[PATCH RFC 00/25] xen: ARMv7 with virtualization extensions
Hello everyone,
this is the very first version of the patch series that introduces ARMv7
with virtualization extensions support in Xen.
The series allows Xen and Dom0 to boot on a Cortex-A15 based Versatile
Express simulator.
See the following announce email for more information about what we
are trying to achieve, as well as the original git history:
See