search for: root_entri

Displaying 20 results from an estimated 20 matches for "root_entri".

2009 Aug 06
18
XCI: can we get to the demo state?
Hello XCI developers, I have an HP 6930 and downloaded xenclient from git. By following the instructions in the HOWTO, I could get xenclient to boot up fine. I then tried to start a guest using xenvm.readme as a template, but nothing shows on the screen for the guest, although xenops shows 2 doms running. Can you point me to how to start a guest? And also, is the tree downloaded from git enough to arrive
2007 Dec 27
2
VT-d and the GPU
Like others on this list, I am trying to employ the VTD-NEO patches in Xen 3.2 unstable to assign the internal graphics device to Dom1/Vista. I have removed the Cirrus Logic emulated device from qemu and replaced the Cirrus Logic vgabios with the actual vgabios from my GPU. However, I am hitting a Xen assert and was hoping someone might be able to point me in the right direction. Below
2008 Jun 05
1
VT-d warnings on Intel DQ35JO
Hi, this is what I get on the Intel DQ35JO. Is it critical?
(XEN) Brought up 2 CPUs
(XEN) [VT-D]iommu.c:1700: Queued Invalidation hardware not found
(XEN) [VT-D]iommu.c:1700: Queued Invalidation hardware not found
(XEN) [VT-D]iommu.c:1700: Queued Invalidation hardware not found
(XEN) [VT-D]iommu.c:1700: Queued Invalidation hardware not found
(XEN) [VT-D]iommu.c:1708: Interrupt Remapping hardware
2008 Sep 17
7
Megaraid SAS driver failing in Xen-3.3.0 but was working in Xen-3.2.2-rc3
On Xen-3.3.0, the domain0 Megaraid SAS (SAS 1068 controller) driver does not load correctly if VT-d support in Xen is enabled: it fails at the point of initializing the firmware. I wasn't seeing this error with Xen-3.2.2-rc3 (unstable version), and with VT-d disabled in Xen-3.3.0 it works. Looks like a regression. Any clues? Thx, Venkat
2008 Oct 25
2
VTd - PCI Passthrough - VMError: fail to assign device
Dear Users, Debian Etch 2.6.18.8-xen from xensource.com with Xen 3.3.0 on AMD64. When I try to start the HVM I get:
VmError: fail to assign device(1:0.0): maybe it has already been assigned to other domain, or maybe it doesn't exist.
[2008-10-25 19:59:30 2460] DEBUG (__init__:1072) XendDomainInfo.destroy: domid=5
[2008-10-25 19:59:30 2460] DEBUG (__init__:1072)
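
This error often means the device was never prepared for assignment. A minimal sketch of the usual Xen 3.x setup, with the BDF taken from the error above and the file paths assumed:

# dom0 kernel command line (2.6.18-xen): hide the device from dom0
pciback.hide=(01:00.0)

# HVM guest config, e.g. /etc/xen/hvm.cfg: hand the hidden device to the guest
pci = [ '01:00.0' ]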
2011 May 19
5
vcpu-pin cause dom0 kernel panic
I use Xen 4.0 (dom0 is SUSE 11 SP1, 2.6.32 x86_64) on a Dell R710 with a PERC H700 RAID adapter:
--Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
--8 CPU cores
--Memory 64G
--RAID5 4.5T
When I dedicate (pin) a CPU core exclusively to dom0 (I specify the "dom0_max_vcpus=1 dom0_vcpus_pin" options for Xen), I get a dom0 kernel panic (pin-1-5.30.bmp). When I pin 2 cores to dom0, the dom0 system can boot up,
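
For context, both options named above go on the hypervisor line of the bootloader entry; a minimal sketch (the xen.gz path is an assumption):

# panics on this box:
kernel /boot/xen-4.0.gz dom0_max_vcpus=1 dom0_vcpus_pin
# boots:
kernel /boot/xen-4.0.gz dom0_max_vcpus=2 dom0_vcpus_pin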
2008 Aug 29
7
FC-HBA assigned to guest domain does not work.
I assigned an FC-HBA to a guest domain, but it did not work. The FC-HBA seems to write to its internal memory, which is mapped into host memory space, via PCI transactions. But there is no mapping in the IOMMU's page table, so a page fault occurs in the IOMMU. I think that MMIO resources mapped via the p2m table should be mapped via the IOMMU's page table too. In other words, XEN_DOMCTL_memory_mapping
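
A minimal sketch of the fix being argued for here, not actual Xen code: the DOMCTL that inserts an MMIO range into the guest's p2m would mirror each frame into the IOMMU page table, so DMA from the assigned device to its BAR stops faulting. The helper names follow Xen's passthrough API of that era; error handling is elided.

static int mmio_map_with_iommu(struct domain *d, unsigned long gfn,
                               unsigned long mfn, unsigned long nr_mfns)
{
    unsigned long i;

    for ( i = 0; i < nr_mfns; i++ )
    {
        /* CPU side: guest-frame -> machine-frame mapping in the p2m. */
        set_mmio_p2m_entry(d, gfn + i, _mfn(mfn + i));
        /* Device side: the same mapping in the IOMMU page table, so
         * DMA from the FC-HBA to this range no longer page-faults. */
        iommu_map_page(d, gfn + i, mfn + i);
    }
    return 0;
}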
2008 Dec 09
4
[VT-D]iommu.c:775: iommu_page_fault: iommu->reg = ffff828bfff57000
Hello, I have been working for some time now on getting an HVM to accept a PCI card from the host. As was said in the VT-d wiki, I bought an ASUS P5E-VM DO motherboard (release 0803) which has the VT-d option in the BIOS. I want to pass a Hauppauge PVR 500 card to a virtual machine running LinuxMCE. After first trying Xen 3.2.1, which did not enable the "VT-D virtualisation" bit in xm dmesg, I
2009 Aug 10
0
Vt-d not working with 3.4.1
Hi folks, I am currently trying to set up a new Xen 3.4.1 host on top of an ASUS P5E-VM DO (latest BIOS, VT-d capable and enabled in the BIOS) to migrate my existing HVMs (Win2k3 server) running on Xen 3.3.0 to a new home. I want to switch over to 3.4.1 to (hopefully!) pass my ISDN board through to an HVM domU. Unfortunately there seems to be some issue with the VT-d DMAR tables which is beyond my knowledge and
2010 Oct 28
0
HVM + IGD Graphics + 4GB RAM = Soft Lockup
I'm having an issue passing through an Intel on-board graphics adapter. This is on a Dell OptiPlex 780 with 8GB of RAM. The pass-through works perfectly fine if I have 2GB of RAM assigned to the HVM domU. If I try to assign 3GB or 4GB of RAM, I get the following on the console:
[ 41.222073] br0: port 2(vif1.0) entering forwarding state
[ 41.269854] (cdrom_add_media_watch()
2008 Nov 18
6
[PATCH] fix memory allocation from NUMA node for VT-d.
...) { dprintk(XENLOG_WARNING VTDPREFIX,
diff -r 5fd51e1e9c79 xen/drivers/passthrough/vtd/iommu.c
--- a/xen/drivers/passthrough/vtd/iommu.c Wed Nov 05 10:57:21 2008 +0000
+++ b/xen/drivers/passthrough/vtd/iommu.c Tue Nov 18 17:37:31 2008 +0900
@@ -148,7 +148,7 @@
     root = &root_entries[bus];
     if ( !root_present(*root) )
     {
-        maddr = alloc_pgtable_maddr();
+        maddr = alloc_pgtable_maddr(NULL);
         if ( maddr == 0 )
         {
             unmap_vtd_domain_page(root_entries);
@@ -205,7 +205,7 @@
     addr &= (((u64)1) << addr_width) - 1;
     s...
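
The shape of the change: alloc_pgtable_maddr() gains a locality hint so that VT-d page tables are allocated on the NUMA node closest to the IOMMU that walks them (NULL keeps the old node-agnostic behaviour). A minimal sketch of such an allocator, assuming Xen's domheap API of that era:

static u64 alloc_pgtable_maddr_on_node(nodeid_t node)
{
    struct page_info *pg;
    void *va;

    /* MEMF_node() steers the heap allocator toward the given node;
     * order 0 is one 4K page, the size of one VT-d table level. */
    pg = alloc_domheap_pages(NULL, 0, MEMF_node(node));
    if ( pg == NULL )
        return 0;

    va = map_domain_page(page_to_mfn(pg));
    clear_page(va);            /* page tables must start out zeroed */
    unmap_domain_page(va);

    return page_to_maddr(pg);
}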
2013 May 01
6
Bug#706543: xen-hypervisor-4.1-amd64: HVM PCI Passthrough not working any more after last upgrade
Package: xen-hypervisor-4.1-amd64
Version: 4.1.4-3
Severity: grave
Justification: causes non-serious data loss
-- System Information:
Debian Release: 7.0
APT prefers testing
APT policy: (500, 'testing')
Architecture: amd64 (x86_64)
Kernel: Linux 3.2.0-4-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
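
The mechanism the series extends to IOPTEs: allocating with GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT) makes the kernel charge the pages to the allocating task's memory cgroup. A minimal sketch; the helper name is illustrative:

#include <linux/gfp.h>

/* One zeroed page of IOPTEs, charged to the caller's memcg. */
static void *iopte_page_alloc(void)
{
    struct page *page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);

    return page ? page_address(page) : NULL;
}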
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2010 Aug 15
24
Xen patches merged to upstream Linux 2.6.36, plans for 2.6.37?
Hello, it looks like upstream linux-2.6.git contains at least the following Xen-related new features for Linux 2.6.36:
- Xen-SWIOTLB support (required for Xen PCI passthrough and dom0)
- Xen PV-on-HVM drivers
- Xen VBD online dynamic resize of guest disks (xvd*)
Congratulations! What are the plans for the 2.6.37 merge window? I believe at least:
- Xen PCI frontend
Others? I'm going to