Displaying 5 results from an estimated 5 matches for "log_dirty".
2012 Mar 07 (4 messages) [PATCH] xen: Make sure log-dirty is turned off before trying to dismantle it
...paging_domctl(struct domain *d, xen_
/* Call when destroying a domain */
void paging_teardown(struct domain *d)
{
+ /* Make sure log-dirty is turned off before trying to dismantle it.
+ * Needs to be done here because it's covered by the hap/shadow lock */
+ d->arch.paging.log_dirty.disable_log_dirty(d);
+
if ( hap_enabled(d) )
hap_teardown(d);
else
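The ordering in this patch matters: once the structures log-dirty relies on are dismantled, any late call through its callbacks touches freed state. Below is a minimal, self-contained sketch of the same disable-before-dismantle pattern; the names (struct facility, facility_teardown) are hypothetical stand-ins, not the actual Xen code.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a domain with a log-dirty-like facility. */
struct facility {
    int enabled;
    unsigned long *bitmap;            /* resource the facility uses */
    void (*disable)(struct facility *f);
};

static void facility_disable(struct facility *f)
{
    f->enabled = 0;                   /* stop users of the bitmap first */
}

static void facility_teardown(struct facility *f)
{
    /* Mirror of the patch: turn the facility off before freeing the
     * state it depends on, so nothing dereferences freed memory. */
    if (f->enabled)
        f->disable(f);
    free(f->bitmap);
    f->bitmap = NULL;
}

int main(void)
{
    struct facility f = { 1, calloc(64, sizeof(unsigned long)),
                          facility_disable };
    facility_teardown(&f);
    printf("enabled=%d bitmap=%p\n", f.enabled, (void *)f.bitmap);
    return 0;
}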
2012 Nov 29 (4 messages) [PATCH] x86/hap: fix race condition between ENABLE_LOGDIRTY and track_dirty_vram hypercall
...save, libxl-save-helper).
So the race seldom happens, but the following cases are possible.
========================================================================
[case-1]
XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY hypercall
 -> paging_enable_logdirty()
     -> hap_logdirty_init()
         -> paging_log_dirty_disable()
            dirty_vram = NULL
         -> paging_log_dirty_init(d, hap_enable_log_dirty, ...) ---> (A)
     -> paging_log_dirty_enable()
**************************************************************************
                                 /* <--- (B) */
         -> hap_enable_vram_tracking() // should be hap_...
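The window between (A) and (B) is a classic check-then-act race: one path installs the log-dirty callbacks, and before it invokes them another hypercall can swap them out, so the wrong handler runs. A minimal pthread sketch of the same shape follows; the names and locking are illustrative stand-ins, not Xen's actual paging lock.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t paging_lock = PTHREAD_MUTEX_INITIALIZER;
static void (*enable_cb)(void);        /* stands in for log_dirty.enable */

static void enable_log_dirty(void)     { puts("enable_log_dirty"); }

/* Racy version: the callback is installed and invoked under separate
 * lock acquisitions, so another thread can swap it in between. */
static void logdirty_hypercall_racy(void)
{
    pthread_mutex_lock(&paging_lock);
    enable_cb = enable_log_dirty;      /* (A) install */
    pthread_mutex_unlock(&paging_lock);
    /* <-- a track_dirty_vram-like path can run here and install
     *     its own callback, which is then called below (B) */
    pthread_mutex_lock(&paging_lock);
    enable_cb();                       /* may call the wrong callback */
    pthread_mutex_unlock(&paging_lock);
}

/* Fixed version: install and invoke inside one critical section,
 * closing the (A)-to-(B) window. */
static void logdirty_hypercall_fixed(void)
{
    pthread_mutex_lock(&paging_lock);
    enable_cb = enable_log_dirty;
    enable_cb();
    pthread_mutex_unlock(&paging_lock);
}

int main(void)
{
    logdirty_hypercall_racy();
    logdirty_hypercall_fixed();
    return 0;
}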
2013 Apr 03 (1 message) [PATCH] vhost: Add vhost_commit callback for SeaBIOS ROM region re-mapping
...++++++++++++++++++++++---------------
hw/vhost.h | 3 +++
2 files changed, 41 insertions(+), 15 deletions(-)
diff --git a/hw/vhost.c b/hw/vhost.c
index 832cc89..00345f2 100644
--- a/hw/vhost.c
+++ b/hw/vhost.c
@@ -385,8 +385,6 @@ static void vhost_set_memory(MemoryListener *listener,
bool log_dirty = memory_region_is_logging(section->mr);
int s = offsetof(struct vhost_memory, regions) +
(dev->mem->nregions + 1) * sizeof dev->mem->regions[0];
- uint64_t log_size;
- int r;
void *ram;
dev->mem = g_realloc(dev->mem, s);
@@ -419,12 +417,47 @@ st...
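As the subject line suggests, the patch moves work out of the per-region callbacks and defers it to a single commit callback once the memory transaction is complete, so the backend sees one consistent update instead of a half-remapped layout. A self-contained sketch of that accumulate-then-commit listener pattern, with simplified hypothetical types rather than QEMU's MemoryListener:

#include <stdio.h>

struct listener {
    int pending_changes;               /* accumulated during a transaction */
    void (*region_changed)(struct listener *l);
    void (*commit)(struct listener *l);
};

static void region_changed(struct listener *l)
{
    /* Just record the change; don't talk to the backend yet. */
    l->pending_changes++;
}

static void commit(struct listener *l)
{
    /* Apply everything once, after the region layout is final. */
    if (l->pending_changes) {
        printf("applying %d region change(s) to backend\n",
               l->pending_changes);
        l->pending_changes = 0;
    }
}

int main(void)
{
    struct listener l = { 0, region_changed, commit };
    l.region_changed(&l);   /* e.g. a ROM region is unmapped */
    l.region_changed(&l);   /* ... and re-mapped elsewhere */
    l.commit(&l);           /* backend sees one consistent update */
    return 0;
}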
2011 Nov 08 (48 messages) Need help with fixing the Xen waitqueue feature
The patch 'mem_event: use wait queue when ring is full' I just sent out
makes use of the waitqueue feature. There are two issues I get with the
change applied:
I think I got the logic right, and in my testing vcpu->pause_count drops
to zero in p2m_mem_paging_resume(). But for some reason the vcpu does
not make progress after the first wakeup. In my debugging there is one
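A common cause of this "woken once, then stuck" symptom is a waiter that goes back to sleep without rechecking its wake condition, or a wakeup delivered before the state it depends on is updated. A minimal condition-variable sketch of the recheck-in-a-loop discipline follows; it uses generic pthreads, not Xen's waitqueue API, and pause_count is a hypothetical stand-in for vcpu->pause_count.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int pause_count = 1;         /* stand-in for vcpu->pause_count */

static void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);
    /* Re-test the predicate on every wakeup; a single bare wait can
     * resume before the condition really holds and then stall. */
    while (pause_count > 0)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    puts("vcpu makes progress");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    pthread_mutex_lock(&lock);
    pause_count = 0;                /* what the resume path would do */
    pthread_cond_broadcast(&cond);  /* wake only after updating state */
    pthread_mutex_unlock(&lock);
    pthread_join(t, NULL);
    return 0;
}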