search for: track_dirty_vram

Displaying 8 results from an estimated 8 matches for "track_dirty_vram".

2010 Jan 12
5
Windows 7 safe mode with networking on Xen 3.4.2?
...addr=0x1f6d, val=0x0. gpe_en_write: addr=0x1f6e, val=0x0. gpe_en_write: addr=0x1f6f, val=0x0. gpe_en_write: addr=0x1f6c, val=0x8. gpe_en_write: addr=0x1f6d, val=0x0. gpe_en_write: addr=0x1f6e, val=0x0. gpe_en_write: addr=0x1f6f, val=0x0. reset requested in cpu_handle_ioreq. Issued domain 15 reboot track_dirty_vram(f0000000, 12c) failed (-1, 3) track_dirty_vram(f0000000, 12c) failed (-1, 3) Andy _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
2008 Dec 13
0
track_dirty_vram(f0000000, 26) failed (-1, 3)
Occasionally I see the message 'track_dirty_vram(f0000000, 26) failed (-1, 3)' in the qemu-dm logs as the DomU reboots. Is it a concern at all? Thanks James
2013 Aug 03
0
track_dirty_vram(f0000000, 160) failed (-1, 22)
...hen the VM (Windows 2008) was started, and you can still connect to the console via VNC but just get a static image. There is no network access to the VM. The user domain is a web server on Windows 2008. The last time it occurred I noticed the following message in /var/log/xen/qemu-xxxxxxxxx.log: track_dirty_vram(f0000000, 26) failed (-1, 3) Can you help me resolve this issue? Thank you! hiam
2010 Apr 22
2
pci-attach - HOWTO
Hi, I tried to attach a passthrough I/O device to domU; the command ended successfully in dom0, but when I entered the domU and typed the "lspci" command I didn't see the new device, although dom0 had removed it from the "pci-list-assignable-devices" output. When I tried to detach it from the domU, the detach command returned a timeout error. What did I miss? Perhaps I
2012 Nov 29
4
[PATCH] x86/hap: fix race condition between ENABLE_LOGDIRTY and track_dirty_vram hypercall
There is a race condition between XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY and HVMOP_track_dirty_vram hypercall. Although HVMOP_track_dirty_vram is called many times from qemu-dm which is connected via VNC, XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY is called only once from a migration process (e.g. xc_save, libxl-save-helper). So the race seldom happens, but the following cases are possible. =========...
2011 Sep 09
1
Problem with Windows on Xen
...ine. Both servers and the workstation are Debian Squeeze installed from bootstrap with almost identical installs. I have searched for similar situations on Google and the forums, but have come up empty-handed. The only thing I can see in /var/log/xen/qemu-dm-rasfs.log is the last line, which says track_dirty_vram(f0000000, 26) failed (-1, 3). Any help on how to resolve this would be appreciated. Thank you -- Shane D. Johnson IT Administrator Rasmussen Equipment
     Workstation             Server 1        Server 2
CPU  Phenom 9950             Opteron 6174    Phenom 1100t
MB   Gigabyte ga-ma790x-ds4  Asus M4A78LT-M  Asus KGPE-D1...
2008 Dec 09
1
Xen 3.3 Windows xp guest insufficient resources
...: addr=0x1f6d, val=0x0. gpe_en_write: addr=0x1f6e, val=0x0. gpe_en_write: addr=0x1f6f, val=0x0. gpe_en_write: addr=0x1f6c, val=0x8. gpe_en_write: addr=0x1f6d, val=0x0. gpe_en_write: addr=0x1f6e, val=0x0. gpe_en_write: addr=0x1f6f, val=0x0. reset requested in cpu_handle_ioreq. Issued domain 6 reboot track_dirty_vram(f0000000, 240) failed (-1, 3)
2011 Nov 08
48
Need help with fixing the Xen waitqueue feature
The patch 'mem_event: use wait queue when ring is full' I just sent out makes use of the waitqueue feature. There are two issues I get with the change applied: I think I got the logic right, and in my testing vcpu->pause_count drops to zero in p2m_mem_paging_resume(). But for some reason the vcpu does not make progress after the first wakeup. In my debugging there is one