Displaying results from an estimated 11 matches for "target_page_bits".
2012 Jul 23
2
[PATCH V2] qemu-xen-traditionnal, Fix dirty logging during migration.
...unsigned long addr = (paddr_index << MCACHE_BUCKET_SHIFT)
+ + ((unsigned long)buffer) - ((unsigned long)entry->vaddr_base);
+ if (access_len == 0)
+ access_len = TARGET_PAGE_SIZE;
+ xc_hvm_modified_memory(xc_handle, domid,
+ addr >> TARGET_PAGE_BITS,
+ ((addr + access_len + TARGET_PAGE_SIZE - 1) >> TARGET_PAGE_BITS)
+ - (addr >> TARGET_PAGE_BITS));
+ }
+
entry->lock--;
if (entry->lock > 0 || pentry == NULL)
return;
@@ -265,7 +277,7 @@ uint8_t *qemu_map_cache(target_phys_addr_t ph...
2008 Jul 08
0
[PATCH] stubdom: Fix modified_memory size calculation
...exec-dm.c
--- a/tools/ioemu/target-i386-dm/exec-dm.c Fri Jul 04 19:52:08 2008 +0100
+++ b/tools/ioemu/target-i386-dm/exec-dm.c Tue Jul 08 12:17:23 2008 +0100
@@ -573,8 +573,8 @@
#ifdef CONFIG_STUBDOM
if (logdirty_bitmap != NULL)
xc_hvm_modified_memory(xc_handle, domid, _addr >> TARGET_PAGE_BITS,
- (_addr + _len + TARGET_PAGE_SIZE - 1) >> TARGET_PAGE_BITS
- - _addr >> TARGET_PAGE_BITS);
+ ((_addr + _len + TARGET_PAGE_SIZE - 1) >> TARGET_PAGE_BITS)
+ - (_addr >> TARGET_PAGE_BITS));
#endif
map...
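The bug fixed above is C operator precedence: '-' binds tighter than '>>', so without the added parentheses the old expression shifted by (TARGET_PAGE_BITS - _addr) instead of subtracting two page indices. A minimal standalone sketch of the intended page-count calculation; PAGE_BITS and the test values below are illustrative, not taken from the patch:

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                 /* 4 KiB pages, the usual x86 value */
#define PAGE_SIZE (1UL << PAGE_BITS)

/* Number of pages covered by [addr, addr + len): round the end up to a
 * page boundary, convert both ends to page indices, then subtract. */
static uint64_t page_count(uint64_t addr, uint64_t len)
{
    uint64_t first = addr >> PAGE_BITS;
    uint64_t last  = (addr + len + PAGE_SIZE - 1) >> PAGE_BITS;
    return last - first;
}

int main(void)
{
    /* A 3-byte access starting at 0xFFE straddles a page boundary,
     * so it touches 2 pages. */
    printf("%llu\n", (unsigned long long)page_count(0xFFE, 3));
    return 0;
}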
2016 Mar 03
2
[RFC qemu 4/4] migration: filter out guest's free pages in ram bulk stage
...xt;
bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
- if (ram_bulk_stage && nr > base) {
- next = nr + 1;
- } else {
- next = find_next_bit(bitmap, size, nr);
- }
-
+ next = find_next_bit(bitmap, size, nr);
*ram_addr_abs = next << TARGET_PAGE_BITS;
return (next - base) << TARGET_PAGE_BITS;
}
@@ -1415,6 +1412,9 @@ void free_xbzrle_decoded_buf(void)
static void migration_bitmap_free(struct BitmapRcu *bmap)
{
g_free(bmap->bmap);
+ if (balloon_free_pages_support()) {
+ g_free(bmap->free_pages_bmap);
+ }...
2016 Mar 03
16
[RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the
guest's RAM pages as dirty in the ram bulk stage; all these pages
will be processed, and that takes quite a lot of CPU cycles.
From the guest's point of view, it doesn't care about the content of free
pages. We can make use of this fact and skip processing the free
pages in the ram bulk stage, which can save a lot of CPU cycles
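A minimal sketch of that idea, assuming a hypothetical per-page bitmap of guest-reported free pages (the helper and names below are illustrative, not the patch's): clear those bits in the migration dirty bitmap before the bulk stage walks it, so find_next_bit simply skips the free pages.

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Hypothetical helper: drop guest-reported free pages from the dirty
 * bitmap (one bit per TARGET_PAGE_SIZE page), so they are neither
 * scanned nor sent during the ram bulk stage. */
static void filter_out_free_pages(unsigned long *dirty,
                                  const unsigned long *free_pages,
                                  size_t nr_pages)
{
    size_t nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

    for (size_t i = 0; i < nr_longs; i++) {
        dirty[i] &= ~free_pages[i];   /* dirty AND NOT free */
    }
}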
2020 Apr 08
2
[PATCH 1/3] target/mips: Support variable page size
Traditionally, MIPS uses a 4KB page size, but Loongson prefers a 16KB page
size in the system emulator. So, let's define TARGET_PAGE_BITS_VARY and
TARGET_PAGE_BITS_MIN to support a variable page size (see the sketch after this entry).
Cc: Jiaxun Yang <jiaxun.yang at flygoat.com>
Signed-off-by: Huacai Chen <chenhc at lemote.com>
---
target/mips/cpu-param.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/target/mips/cpu-param.h b/target/mips/cpu-pa...
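A hedged sketch of what the cpu-param.h definitions could look like; the structure and values are assumptions, not copied from the truncated diff above. TARGET_PAGE_BITS_MIN of 12 corresponds to the traditional 4KB pages, while softmmu picks the actual size at run time:

/* target/mips/cpu-param.h (sketch, values assumed) */
#ifdef CONFIG_USER_ONLY
#define TARGET_PAGE_BITS 12          /* user mode keeps fixed 4 KiB pages */
#else
#define TARGET_PAGE_BITS_VARY        /* page size chosen at run time */
#define TARGET_PAGE_BITS_MIN 12      /* smallest supported: 4 KiB */
#endif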
2010 Aug 12
59
[PATCH 00/15] RFC xen device model support
Hi all,
this is the long-awaited patch series to add xen device model support to
qemu; the main author is Anthony Perard.
In developing this series we tried to come up with the cleanest possible
solution from the qemu point of view, limiting the number of changes to
common code as much as possible. The end result still requires a couple
of hooks in piix_pci, but overall the impact should be very
2008 Jan 09
4
[PATCH/RFC 0/2] CPU hotplug virtio driver
I'm sending a first draft of my proposed cpu hotplug driver for kvm/virtio.
The first patch is the kernel module, while the second is the userspace pci device.
The host boots with the maximum cpus it should ever use, through the -smp parameter.
Due to real machine constraints (which qemu copies), i386 does not allow any addition
of cpus after boot, so this is the most general way.
I do
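A usage sketch of the -smp parameter mentioned above; the binary name, vcpu count, memory size, and image path are placeholders:

# Boot the guest with the maximum number of vcpus it will ever use.
qemu-system-x86_64 -smp 8 -m 1024 -hda guest-disk.img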
2007 Dec 21
0
[Virtio-for-kvm] [PATCH 1/7] userspace virtio
...que);
+ ram_addr_t pa;
+
+ addr -= vdev->addr;
+
+ switch (addr) {
+ case VIRTIO_PCI_GUEST_FEATURES:
+ if (vdev->set_features)
+ vdev->set_features(vdev, val);
+ vdev->features = val;
+ break;
+ case VIRTIO_PCI_QUEUE_PFN:
+ pa = (ram_addr_t)val << TARGET_PAGE_BITS;
+ vdev->vq[vdev->queue_sel].pfn = val;
+ if (pa == 0) {
+ vdev->vq[vdev->queue_sel].vring.desc = NULL;
+ vdev->vq[vdev->queue_sel].vring.avail = NULL;
+ vdev->vq[vdev->queue_sel].vring.used = NULL;
+ } else if (pa < (ram_size - TARGET_PAGE_S...
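In this hunk the guest writes a page frame number to VIRTIO_PCI_QUEUE_PFN and the device model turns it into the guest-physical address of the vring by shifting left by TARGET_PAGE_BITS. A worked example assuming the common 4 KiB page size; the PFN value is purely illustrative:

#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12   /* assumed 4 KiB pages */

int main(void)
{
    uint32_t pfn = 0x12345;                            /* value the guest wrote */
    uint64_t pa  = (uint64_t)pfn << TARGET_PAGE_BITS;  /* 0x12345 * 4096 */

    printf("pfn 0x%x -> pa 0x%llx\n", pfn, (unsigned long long)pa);
    /* prints: pfn 0x12345 -> pa 0x12345000; pa == 0 means the queue is disabled */
    return 0;
}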