Displaying 20 results from an estimated 100 matches similar to: "[PATCH V2] qemu-xen-traditionnal, Fix dirty logging during migration."
2010 Aug 12
59
[PATCH 00/15] RFC xen device model support
Hi all,
this is the long-awaited patch series to add xen device model support in
qemu; the main author is Anthony Perard.
While developing this series, we tried to come up with the cleanest possible
solution from the qemu point of view, limiting the amount of changes to
common code as much as possible. The end result still requires a couple
of hooks in piix_pci but overall the impact should be very
2008 Jul 08
0
[PATCH] stubdom: Fix modified_memory size calculation
stubdom: Fix modified_memory size calculation
The >> operator has lower precedence than -.
Signed-off-by: Samuel Thibault <samuel.thibault@eu.citrix.com>
diff -r 87954c7d407e tools/ioemu/target-i386-dm/exec-dm.c
--- a/tools/ioemu/target-i386-dm/exec-dm.c Fri Jul 04 19:52:08 2008 +0100
+++ b/tools/ioemu/target-i386-dm/exec-dm.c Tue Jul 08 12:17:23 2008 +0100
@@ -573,8 +573,8 @@
#ifdef
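The one-line summary above is about C operator precedence: the additive operators bind tighter than the shifts, so a shift amount written without parentheses absorbs the subtraction. A minimal standalone illustration of the pitfall (hypothetical values, not the actual exec-dm.c hunk):

#include <stdio.h>

#define TARGET_PAGE_BITS 12

int main(void)
{
    unsigned long size = 0x3000;            /* three 4KB pages */

    /* Without parentheses the subtraction happens first, so this
     * shifts by TARGET_PAGE_BITS - 1 == 11 instead of 12. */
    unsigned long wrong = size >> TARGET_PAGE_BITS - 1;
    unsigned long right = (size >> TARGET_PAGE_BITS) - 1;

    printf("%lu %lu\n", wrong, right);      /* prints "6 2" */
    return 0;
}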
2016 Mar 03
2
[RFC qemu 4/4] migration: filter out guest's free pages in ram bulk stage
Get the free pages information through virtio and filter out the free
pages in the ram bulk stage. This can significantly reduce the total
live migration time as well as network traffic.
Signed-off-by: Liang Li <liang.z.li at intel.com>
---
migration/ram.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 46 insertions(+), 6 deletions(-)
diff --git
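As a rough sketch of the mechanism described above (illustrative only; the function and bitmap names are assumptions, not the actual migration/ram.c code), the bulk-stage filter amounts to clearing the migration dirty bit of every page the guest reported as free:

#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Clear the migration-dirty bit of every page the guest reported free,
 * so the bulk stage never reads or transmits it. */
static void filter_out_free_pages(unsigned long *migration_bitmap,
                                  const unsigned long *free_page_bitmap,
                                  unsigned long nr_pages)
{
    for (unsigned long pfn = 0; pfn < nr_pages; pfn++) {
        unsigned long word = pfn / BITS_PER_LONG;
        unsigned long bit  = 1UL << (pfn % BITS_PER_LONG);

        if (free_page_bitmap[word] & bit) {
            migration_bitmap[word] &= ~bit;
        }
    }
}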
2016 Mar 03
16
[RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the
guest's RAM pages as dirty in the ram bulk stage; all of these pages
will be processed, and that takes quite a lot of CPU cycles.
From the guest's point of view, it doesn't care about the content of free
pages. We can make use of this fact and skip processing the free
pages in the ram bulk stage, which can save a lot of CPU cycles
2010 Oct 01
2
trouble building 4.0.1
I finally decided to build 4.0.1 on my OpenSuSE box. I've been plodding along and resolving issues/dependencies as needed but now I'm stumped. While building I get the following message:
cc1: warnings being treated as errors
netfront.c:41:32: error: variably modified ‘tx_freelist’ at file scope
netfront.c:44:34: error: variably modified ‘rx_buffers’ at file scope
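For anyone hitting the same wall: gcc's "variably modified ... at file scope" means an array dimension that is not an integer constant expression, which C only allows inside functions. A minimal illustration (not the actual netfront.c declarations), with the rejected form shown commented out:

/* A 'const' object is not a constant expression in C, so using it as a
 * file-scope array size makes the type variably modified:
 *
 *     static const int nr_bufs = 256;
 *     static void *rx_buffers[nr_bufs];   // error: variably modified at file scope
 *
 * A macro (or enum constant) is a true constant expression and is accepted: */
#define NR_BUFS 256
static void *rx_buffers[NR_BUFS];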
2020 Apr 08
2
[PATCH 1/3] target/mips: Support variable page size
Traditionally, MIPS uses a 4KB page size, but Loongson prefers a 16KB page
size in the system emulator. So, let's define TARGET_PAGE_BITS_VARY and
TARGET_PAGE_BITS_MIN to support a variable page size.
Cc: Jiaxun Yang <jiaxun.yang at flygoat.com>
Signed-off-by: Huacai Chen <chenhc at lemote.com>
---
target/mips/cpu-param.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git
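For context, the kind of change being described (a hedged sketch of target/mips/cpu-param.h, not the exact hunk) is essentially a pair of definitions telling QEMU's softmmu that the page size is chosen at run time, with 4KB as the floor:

/* Sketch only: let the target pick its page size at run time. */
#define TARGET_PAGE_BITS_VARY            /* page size is fixed when the CPU is created */
#define TARGET_PAGE_BITS_MIN 12          /* smallest supported page: 4KB (14 would mean 16KB) */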
2007 Oct 24
16
PATCH 0/10: Merge PV framebuffer & console into QEMU
The following series of 10 patches is a merge of the xenfb and xenconsoled
functionality into the qemu-dm code. The general approach taken is to have
qemu-dm provide two machine types - one for xen paravirt, the other for
fullyvirt. For compatibility the latter is the default. The goals overall
are to kill LibVNCServer, remove a lot of code duplication and/or parallel
implementations of the same concepts, and
2011 Jul 21
51
Linux Stubdom Problem
2011/7/19 Stefano Stabellini <stefano.stabellini@eu.citrix.com>:
> CC'ing Tim and xen-devel
>
> On Mon, 18 Jul 2011, Jiageng Yu wrote:
>> 2011/7/16 Stefano Stabellini <stefano.stabellini@eu.citrix.com>:
>> > On Fri, 15 Jul 2011, Jiageng Yu wrote:
>> >> 2011/7/15 Jiageng Yu <yujiageng734@gmail.com>:
>> >> > 2011/7/15
2012 Oct 08
21
[PATCH 00/14] Remove old_portio users for memory region PIO mapping
When running on PowerPC, we don't have native PIO support. There are a few hacks
around to enable PIO access on PowerPC nevertheless.
The most typical one is the isa-mmio device. It takes MMIO requests and converts
them to PIO requests on the (QEMU internal) PIO bus.
This however is not how real hardware works and it limits us in the ability to
spawn eventfds on PIO ports
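To make the hack concrete, the isa-mmio style conversion described above boils down to an MMIO region whose callbacks forward every access to the legacy port I/O helpers. A rough sketch assuming QEMU's MemoryRegionOps and cpu_inb()/cpu_outb() interfaces (byte accesses only, for brevity; not the code being removed):

/* Replay each MMIO access on the (QEMU internal) PIO bus. */
static uint64_t mmio_to_pio_read(void *opaque, hwaddr addr, unsigned size)
{
    return cpu_inb(addr & 0xffff);
}

static void mmio_to_pio_write(void *opaque, hwaddr addr,
                              uint64_t val, unsigned size)
{
    cpu_outb(addr & 0xffff, val);
}

static const MemoryRegionOps mmio_to_pio_ops = {
    .read  = mmio_to_pio_read,
    .write = mmio_to_pio_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
};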
2012 Jan 12
4
[PATCH] qemu-dm: add command to flush buffer cache
Add support for a xenstore dm command to flush qemu's buffer cache.
qemu will just keep mapping pages and not release them, which causes problems
for the memory pager (since the page is mapped, it won't get paged out). When
the pager has trouble finding a page to page out, it asks qemu to flush its
buffer, which releases all the page mappings. This makes it possible to find
2007 Jan 26
12
[Patch] the interface of invalidating qemu mapcache
An HVM balloon driver or similar, which is under development, may decrease
or increase the machine memory taken by an HVM guest; on an IA32/IA32e
host, Qemu currently maps the physical memory of the HVM guest in small
blocks of memory (the block size is 64K on an IA32 host or 1M on an IA32e
host). When the HVM balloon driver decreases the reserved machine memory of
the HVM guest, Qemu should unmap the
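Since the excerpt describes a fixed mapping granularity, here is the arithmetic it implies (an illustrative helper, not Qemu's actual mapcache code): a guest-physical address selects a block to map or unmap by shifting out the block size, 64K (16 bits) on IA32 or 1M (20 bits) on IA32e.

#define MAPCACHE_BUCKET_SHIFT 16                      /* 64K blocks on IA32 */
#define MAPCACHE_BUCKET_SIZE  (1UL << MAPCACHE_BUCKET_SHIFT)

static inline unsigned long mapcache_bucket(unsigned long guest_paddr)
{
    return guest_paddr >> MAPCACHE_BUCKET_SHIFT;      /* which block to (un)map */
}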
2012 Oct 04
5
Bug#689646: xen-utils-4.1: fails to create HVM domU
Package: xen-utils-4.1
Version: 4.1.3-2
Severity: important
Dear Maintainer,
Creating a new HVM domU fails with the following error:
map shared IO page returned error 22
I've narrowed it down to qemu-dm which fails to start:
# /usr/lib/xen-4.1/bin/qemu-dm
[...]
qemu_map_cache_init nr_buckets = 10000 size 4194304
errno0 = 2
domid = -1
shared page at pfn 0
errno1 = 3
errno2 = 22
map shared IO
2008 Jan 09
4
[PATCH/RFC 0/2] CPU hotplug virtio driver
I'm sending a first draft of my proposed cpu hotplug driver for kvm/virtio.
The first patch is the kernel module; the second is the userspace pci device.
The host boots with the maximum cpus it should ever use, through the -smp parameter.
Due to real machine constraints (which qemu copies), i386 does not allow for any addition
of cpus after boot, so this is the most general way.
I do
2006 Mar 24
2
[PATCH] qemu pcnet emulation fixes
The attached patch to the qemu emulation of the pcnet hardware fixes
several problems. It will now only read and write a transmit or receive
descriptor once. It will correctly handle transmitting frames with more
than two fragments. It will discard oversize frames instead of
corrupting memory. I have tested all the changes I have made and even
seen an improvement in receive performance from
2012 Apr 05
15
[PATCH 0/0] MSI/MSIX injection for Xen HVM guests
Implement a simple Xen APIC module and use it to deliver MSI/MSIX for
Xen HVM guests.
2011 May 04
4
[PATCH 0/3] virtio-net: 64 bit features, event index
OK, here's a patch that implements the virtio spec update that I
sent earlier. It supersedes the PUBLISH_USED_IDX patches
I sent out earlier.
Support is added in both userspace and vhost-net.
I see nice performance improvements: e.g. from 12 to 18 Gbit/s host
to guest with netperf, but did not spend a lot of time testing
performance. I hope others will try this out and report.
Note: there
2007 Dec 21
0
[Virtio-for-kvm] [PATCH 1/7] userspace virtio
From 80b234220ea85d6fb291b0509ce2b3322e5ecc1f Mon Sep 17 00:00:00 2001
From: Dor Laor <dor.laor@qumranet.com>
Date: Wed, 19 Dec 2007 23:07:16 +0200
Subject: [PATCH] [PATCH 1/3] virtio infrastructure
This patch implements the basic infrastructure for virtio devices. These
devices are exposed to the guest as real PCI devices. The PCI vendor/device
IDs have been donated by Qumranet and the