Displaying 5 results from an estimated 5 matches for "xendevid".
2009 Feb 04 (3 replies): unable to assign ip from config file
Hi,
I'm using a Fedora Core 8 domU and Fedora Core 8 as my dom0 on xen-3.1.0-13;
my config file reads:
kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
ramdisk="/boot/initrd-2.6.21-2950.fc8xen-no-scsi.img"
memory = 428
name = "fedora1.fc8"
vif = [ 'mac=00:16:3e:00:00:03,ip=192.168.2.105' ]
dhcp = "off"
netmask = "255.255.255.0"
gateway
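
For reference, in xm configs of this era the top-level dhcp/netmask/gateway options (together with a top-level ip option) are folded into the guest kernel's ip= boot argument, which only takes effect if the domU kernel does IP autoconfiguration, while the ip= inside the vif line mainly informs dom0's vif/antispoofing scripts. A minimal sketch of a complete static-IP config under those assumptions; the gateway, the top-level ip option, and the root/disk lines are hypothetical:

    kernel  = "/boot/vmlinuz-2.6.21-2950.fc8xen"
    ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-no-scsi.img"
    memory  = 428
    name    = "fedora1.fc8"
    # ip= here is read by dom0's vif scripts (antispoofing/routing);
    # it does not by itself configure eth0 inside the guest
    vif     = [ 'mac=00:16:3e:00:00:03,ip=192.168.2.105' ]
    # with dhcp off, xm builds an ip= kernel argument from the options below;
    # otherwise the guest still needs its own network config (e.g. ifcfg-eth0)
    dhcp    = "off"
    ip      = "192.168.2.105"          # hypothetical: mirrors the vif address
    netmask = "255.255.255.0"
    gateway = "192.168.2.1"            # hypothetical
    root    = "/dev/xvda1 ro"          # hypothetical
    disk    = [ 'tap:aio:/var/lib/xen/images/fedora1.img,xvda1,w' ]  # hypothetical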
2007 Nov 17 (1 reply): Segmentation Faults on DomU
Hi All,
I am seeing strange segmentation faults on DomU after I install gcc (or,
for that matter, any software with yum).
I am using PV Linux images.
- I am using the source for Xen 3.0.4 from XenSource; I built Xen +
Dom0 from source and booted Dom0.
- I then downloaded the Fedora 6 file system images from jailtime.org
and booted the DomU using the kernel from the step above.
- When I run
2008 Jul 22 (0 replies): Duplicate IRQ problem with PCI Passthrough
Hi All,
I have been unable to find a solution to this problem. There are others on
the list who have had this problem, but the list doesn't have a solution
AFAIK.
I am doing a PCI passthrough and end up with IRQ 16 shared between Dom0's NIC
and the DomU's NIC.
When I load the system with some network I/O, IRQ 16
gets disabled and hence networking stops completely.
Any
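
For context, a Xen 3.x PCI passthrough is typically set up by hiding the device from dom0 with pciback and listing it in the domU config; the BDF address below is hypothetical. Whether the legacy interrupt line can be unshared depends on the board's routing; where the NIC and dom0 support MSI, enabling it sidesteps the shared IRQ 16 entirely.

    # dom0, pciback built as a module (a pciback.hide=(...) boot parameter
    # works the same when it is compiled in):
    #   modprobe pciback hide=(0000:03:00.0)
    # domU config:
    pci = [ '0000:03:00.0' ]   # hypothetical BDF of the NIC handed to the domU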
2008 Jul 29 (0 replies): xenoprof bug?
Hi,
I ran xenoprof with the event mask LLC_MISSES:10000. This should count L2
cache misses on the Core 2 architecture.
After the run, opreport displays:
Counted *L2_RQSTS* events (number of L2 cache requests) with a unit mask of
0x41 (multiple flags) count 10000
LLC_MISSES:10000 | .....
My question was whether the message is incorrect or whether the event mask is
not being set correctly by oprofile.
My
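
For what it's worth, on Core 2 the architectural LLC Misses event is event select 0x2E with unit mask 0x41, and the last-level cache on that part is the L2, so opreport decoding the mask back to L2_RQSTS with unit mask 0x41 may simply be the same hardware event under its model-specific name rather than a mis-programmed mask. A minimal sketch of how such a run is typically driven with the xenoprof-patched oprofile tools, where the symbol paths and domain list are assumptions:

    import subprocess

    def op(*args):
        # thin wrapper over opcontrol; assumes the xenoprof-patched oprofile
        subprocess.run(["opcontrol", *args], check=True)

    op("--reset")                       # clear any previous samples
    op("--start-daemon",
       "--event=LLC_MISSES:10000",      # event mask from the original post
       "--xen=/boot/xen-syms",          # assumed path to Xen symbols
       "--vmlinux=/boot/vmlinux-syms",  # assumed path to dom0 kernel symbols
       "--active-domains=1")            # assumed: profile the domU with id 1
    op("--start")
    # ... run the workload under test ...
    op("--stop")
    op("--shutdown")
    subprocess.run(["opreport", "-l"], check=True)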
2007 Oct 29 (4 replies): Avoiding VmEntry/VmExit
Hi All,
I am trying to provide services to guest VMs, where I wish to run the guest
VMs in a loop.
I wish to use a core to schedule a guest VM, service it (e.g. execute an ISR),
and then return to the context of Xen on that core, so that I can then
schedule the next VM on that core.
In doing all this, the goal is to avoid the calls to VMEntry and VMExit. Is
there a workaround for this, or