search for: dompage

Displaying 4 results from an estimated 4 matches for "dompage".

2009 Apr 06 · 5 messages
Config to set CPU affinity and distribute interrupts
...handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN) Interrupts { 0-255 }
(XEN) I/O Memory { 0-febff, fec01-fedff, fee01-ffffffff }
(XEN) I/O Ports { 0-1f, 22-3f, 44-60, 62-9f, a2-cfb, d00-ffff }
(XEN) Memory pages belonging to domain 0:
(XEN) DomPage list too long to display
(XEN) XenPage 00000bed: caf=80000002, taf=e8000002
(XEN) XenPage 00000bec: caf=80000001, taf=e8000001
(XEN) XenPage 00000beb: caf=80000001, taf=e8000001
(XEN) XenPage 00000bea: caf=80000001, taf=e8000001
(XEN) XenPage 00000be9: caf=80000002, taf=e8000002
(XEN) VCPU informat...
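This looks like output from the hypervisor's 'q' debug key, which dumps a domain's page lists; the caf/taf values appear to be each page's count_info and type_info, and the DomPage listing is skipped when the domain owns too many pages. A minimal standalone sketch of that style of dump (the struct fields, the cutoff, and the helper name are assumptions, modeled loosely on Xen's dump_pageframe_info()):

#include <stdio.h>

/* Hypothetical stand-in for Xen's struct page_info (assumption). */
struct page_info {
    unsigned long mfn;        /* machine frame number */
    unsigned long count_info; /* printed as caf= */
    unsigned long type_info;  /* printed as taf= */
};

/* Dump a page list in the style seen above; skip the per-page
 * listing when the list is too long, as the hypervisor does. */
static void dump_page_list(const char *tag, const struct page_info *pages,
                           unsigned int nr, unsigned int limit)
{
    if (nr > limit) {
        printf("(XEN) %s list too long to display\n", tag);
        return;
    }
    for (unsigned int i = 0; i < nr; i++)
        printf("(XEN) %s %08lx: caf=%08lx, taf=%08lx\n",
               tag, pages[i].mfn, pages[i].count_info, pages[i].type_info);
}

int main(void)
{
    struct page_info xenpages[] = {
        { 0xbed, 0x80000002, 0xe8000002 },
        { 0xbec, 0x80000001, 0xe8000001 },
    };
    dump_page_list("DomPage", NULL, 12345, 16);     /* too long to display */
    dump_page_list("XenPage", xenpages, 2, 16);     /* listed in full */
    return 0;
}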
2011 Nov 08 · 48 messages
Need help with fixing the Xen waitqueue feature
The patch "mem_event: use wait queue when ring is full" I just sent out makes use of the waitqueue feature. There are two issues I hit with the change applied. I think I got the logic right, and in my testing vcpu->pause_count drops to zero in p2m_mem_paging_resume(), but for some reason the vcpu does not make progress after the first wakeup. In my debugging there is one...
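A vcpu that pauses correctly but never runs again after its first wakeup is the classic symptom of a waiter that does not re-check its condition, or of a wakeup delivered before the sleep. A generic pthread sketch of the safe pattern (names are illustrative, not Xen's waitqueue API): wait on the predicate in a loop, and always change the predicate before signaling.

#include <pthread.h>
#include <stdbool.h>

/* Illustrative stand-in for "the ring is full" state (assumption). */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ring_space = PTHREAD_COND_INITIALIZER;
static bool ring_full = true;

/* Producer side: block while the ring is full.  The while loop (not
 * an if) is what guarantees progress after spurious or early wakeups. */
static void wait_for_ring_space(void)
{
    pthread_mutex_lock(&lock);
    while (ring_full)
        pthread_cond_wait(&ring_space, &lock);
    pthread_mutex_unlock(&lock);
}

/* Consumer side: update the predicate first, then wake waiters. */
static void ring_consumed(void)
{
    pthread_mutex_lock(&lock);
    ring_full = false;
    pthread_cond_signal(&ring_space);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    /* Single-threaded demo: free the ring, then the wait returns at once. */
    ring_consumed();
    wait_for_ring_space();
    return 0;
}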
2008 May 19 · 21 messages
[PATCH 0/5] VT-d support for PV guests
...llows:
xen-vtd-unmap.patch --- Make the VT-d iommu_unmap_page() code actually do something close to useful.
xen-ptab-dump.patch --- There's no point in using 'current' when an IOMMU page fault is raised. Also, add some page type statistics for DomPage debug output.
xen-iommu-pv.patch --- Add support for the iommu_pv_enable boot parameter and IOMMU assignment of PCI devices to guests.
xen-iommu-pv-mappings.patch --- Hook iommu_{un}map_page() calls into various Xen locations.
xen-pv-assign.patch --- Allow PCI devices to be as...
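The heart of such a series is mirroring every change to a guest's memory map into the IOMMU page tables, so device DMA sees the same address space as the CPU. A hypothetical sketch of that hook shape (all names and types here are assumptions for illustration, not the patch's actual code):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t gfn_t;  /* guest frame number (illustrative) */
typedef uint64_t mfn_t;  /* machine frame number (illustrative) */

/* Stub IOMMU entry points; the real ones would edit VT-d page
 * tables and flush the IOTLB. */
static int iommu_map_page(int domid, gfn_t gfn, mfn_t mfn)
{
    printf("dom%d: iommu map gfn %#llx -> mfn %#llx\n",
           domid, (unsigned long long)gfn, (unsigned long long)mfn);
    return 0;
}

static int iommu_unmap_page(int domid, gfn_t gfn)
{
    printf("dom%d: iommu unmap gfn %#llx\n",
           domid, (unsigned long long)gfn);
    return 0;
}

/* Wherever the hypervisor maps or unmaps a guest page, mirror the
 * change into the IOMMU tables. */
static int guest_add_page(int domid, gfn_t gfn, mfn_t mfn)
{
    /* ... update the CPU-visible page tables here ... */
    return iommu_map_page(domid, gfn, mfn);
}

static int guest_remove_page(int domid, gfn_t gfn)
{
    int rc = iommu_unmap_page(domid, gfn);
    /* ... then tear down the CPU-visible mapping ... */
    return rc;
}

int main(void)
{
    guest_add_page(1, 0x1000, 0xbeef);
    guest_remove_page(1, 0x1000);
    return 0;
}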
2012 Oct 17 · 28 messages
Xen PVM: Strange lockups when running PostgreSQL load
I am currently looking at a bug report [1] about lockups that happen when a Xen PVM guest with multiple VCPUs runs a high-IO database load (a test script is available in the bug report). In experimenting, it seems that this happens (or becomes more likely) when the number of VCPUs is 8 or higher (though I have not tried 6, only 2 and 4); having autogroup enabled seems to make it more likely, too...