search for: root_entry

Displaying 20 results from an estimated 20 matches for "root_entry".

2009 Aug 06
18
XCI: can we get to the demo state?
Hello XCI developers, I have an HP 6930 and downloaded xenclient from git. By following the instructions in the HOWTO, I could get xenclient to boot up fine. I then tried to start a guest using xenvm.readme as a template, but nothing shows on the screen for the guest, although xenops shows 2 doms running. Can you point me to how to start a guest? Also, is the tree downloaded from git enough to arrive
2007 Dec 27
2
VT-d and the GPU
...= fff77000 (XEN) intel-iommu.c:724: iommu_page_fault:DMA Write: DEVICE 0:2.0 addr 7e8000002 (XEN) print_vtd_entries: domain_id = 0 bdf = 0:2:0 devfn = 10, gmfn = 7e800 (XEN) ---- print_vtd_entries 0 ---- (XEN) d->pgd = ffbce000 virt_to_maddr(hd->pgd) = bce000 (XEN) root_entry = ffbcb000 (XEN) root_entry[0] = bc5001 (XEN) maddr_to_virt(root_entry[0]) = ffbc5001 (XEN) ctxt_entry[10].lo == 0 (XEN) intel-iommu.c:1271:d0 domain_context_unmap_one_2:bdf = 0:2:0 (XEN) intel-iommu.c:1186:d0 domain_context_mapping:PCI: bdf = 0:2:0 (XEN) domctl.c:552:...
2008 Jun 05
1
VT-d warnings on Intel DQ35JO
...ge_fault: iommu->reg = ffff828bfff57000 (XEN) [VT-D]iommu.c:733: iommu_fault_status: Fault Overflow (XEN) [VT-D]iommu.c:720: iommu_fault:DMA Write: 0:2.0 addr f00000000 REASON 5 iommu->reg = ffff828bfff57000 (XEN) print_vtd_entries: iommu = ffff83007aefb480 bdf = 0:2:0 gmfn = f00000 (XEN) root_entry = ffff83007c06e000 (XEN) root_entry[0] = 7c18b001 (XEN) context = ffff83007c18b000 (XEN) context[10] = 101_7d1dc001 (XEN) l3 = ffff83007d1dc000 (XEN) l3_index = 3c (XEN) l3[3c] = 0 (XEN) l3[3c] not present (XEN) *** LOADING DOMAIN 0 *** Thanks, Neo -- I would remember...
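For context, the walk these (XEN) lines trace is mechanical: the root table is indexed by PCI bus, the context table it points to is indexed by devfn, and the context entry points at a multi-level page table indexed by 9-bit slices of the faulting gmfn. A minimal C sketch of that decode, with illustrative names (maddr_to_virt() stands in for the hypervisor's fixed physical-to-virtual mapping; this is not Xen's exact print_vtd_entries()):

#include <stdint.h>
#include <stdio.h>

#define PRESENT(e)    ((e) & 1)           /* bit 0: entry present      */
#define ADDR(e)       ((e) & ~0xfffULL)   /* bits 12+: next-level base */
#define LEVEL_STRIDE  9                   /* 512 entries per level     */
#define LEVEL_MASK    ((1ULL << LEVEL_STRIDE) - 1)

/* Assumed helper: map a machine address to a usable pointer. */
extern uint64_t *maddr_to_virt(uint64_t maddr);

void walk_vtd(uint64_t *root_table, int bus, int devfn,
              uint64_t gmfn, int levels)
{
    uint64_t root = root_table[bus];              /* root_entry[bus] */
    if (!PRESENT(root)) { puts("root_entry not present"); return; }

    uint64_t *ctx = maddr_to_virt(ADDR(root));
    uint64_t ce = ctx[devfn];                     /* context[devfn]  */
    if (!PRESENT(ce)) { puts("context not present"); return; }

    uint64_t *pt = maddr_to_virt(ADDR(ce));       /* top-level table */
    for (int l = levels; l >= 1; l--) {
        unsigned int idx =
            (unsigned int)((gmfn >> ((l - 1) * LEVEL_STRIDE)) & LEVEL_MASK);
        uint64_t pte = pt[idx];
        printf("l%d[%x] = %llx\n", l, idx, (unsigned long long)pte);
        if (!PRESENT(pte)) { printf("l%d[%x] not present\n", l, idx); return; }
        if (l > 1) pt = maddr_to_virt(ADDR(pte));
    }
}

This matches the trace above: gmfn = f00000 with a 3-level table gives an l3 index of f00000 >> 18 = 3c, exactly the printed l3[3c], and a zero entry at any level ends the walk with "not present", which is what immediately precedes the fault.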
2008 Sep 17
7
Megaraid SAS driver failing in Xen-3.3.0 but was working in Xen-3.2.2-rc3
On Xen-3.3.0, the domain0 Megaraid SAS (SAS 1068 controller) driver does not load correctly if VT-d support in Xen is enabled; it fails at the point of initializing the firmware. I wasn't seeing this error with Xen-3.2.2-rc3 (unstable version), and with VT-d disabled in Xen-3.3.0 it works. Looks like a regression. Any clues? Thx, Venkat
2008 Oct 25
2
VTd - PCI Passthrough - VMError: fail to assign device
...ge_fault: iommu->reg = ffff828bfff57000 (XEN) [VT-D]iommu.c:744: iommu_fault_status: Fault Overflow (XEN) [VT-D]iommu.c:729: iommu_fault:DMA Write: 0:2.0 addr 200200000 REASON 5 iommu->reg = ffff828bfff57000 (XEN) print_vtd_entries: iommu = ffff8300bd6ad180 bdf = 0:2:0 gmfn = 200200 (XEN) root_entry = ffff8300bc9e0000 (XEN) root_entry[0] = b9cd6001 (XEN) context = ffff8300b9cd6000 (XEN) context[10] = 101_be4a6001 (XEN) l3 = ffff8300be4a6000 (XEN) l3_index = 8 (XEN) l3[8] = 0 (XEN) l3[8] not present (XEN) *** LOADING DOMAIN 0 *** (XEN) Xen kernel: 64-bit, lsb, comp...
2011 May 19
5
vcpu-pin cause dom0 kernel panic
I use Xen 4.0 (dom0 is SUSE 11 SP1, 2.6.32 x86_64) on a Dell R710 with a PERC H700 RAID adapter: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 CPU cores, 64G memory, 4.5T RAID5. When I dedicate (pin) a single CPU core for dom0's use (I specify the "dom0_max_vcpus=1 dom0_vcpus_pin" options for Xen), I get a dom0 kernel panic (pin-1-5.30.bmp). When I pin 2 cores to dom0, the dom0 system can boot up,
2008 Aug 29
7
FC-HBA assigned to guest domain does not work.
..._fault_status: Primary Pending Fault (XEN) [VT-D]iommu.c:729: iommu_fault:DMA Write: 7:0.0 addr f3041000 REASON 5 ^^^^^^^^ iommu->reg = ffff828bfff58000 (XEN) print_vtd_entries: iommu = ffff8300d6e01780 bdf = 7:0:0 gmfn = f3041 (XEN) root_entry = ffff83021fdf2000 (XEN) root_entry[7] = 217de9001 (XEN) context = ffff830217de9000 (XEN) context[0] = 201_2155d1001 (XEN) l3 = ffff8302155d1000 (XEN) l3_index = 3 (XEN) l3[3] = 1d802a003 (XEN) l2 = ffff8301d802a000 (XEN) l2_index = 198 (XEN) l2[198] = 0 (XEN)...
2008 Dec 09
4
[VT-D]iommu.c:775: iommu_page_fault: iommu->reg = ffff828bfff57000
...ge_fault: iommu->reg = ffff828bfff57000 (XEN) [VT-D]iommu.c:744: iommu_fault_status: Fault Overflow (XEN) [VT-D]iommu.c:729: iommu_fault:DMA Write: 0:2.0 addr 594fd7000 REASON 5 iommu->reg = ffff828bfff57000 (XEN) print_vtd_entries: iommu = ffff8300ce2fb980 bdf = 0:2:0 gmfn = 594fd7 (XEN) root_entry = ffff83012bdf1000 (XEN) root_entry[0] = 1277ac001 (XEN) context = ffff8301277ac000 (XEN) context[10] = 101_12bdeb001 (XEN) l3 = ffff83012bdeb000 (XEN) l3_index = 16 (XEN) l3[16] = 0 (XEN) l3[16] not present (XEN) *** LOADING DOMAIN 0 *** (XEN) Xen kernel: 64-bit, lsb,...
2009 Aug 10
0
Vt-d not working with 3.4.1
...iommu_fault_status: Fault Overflow (XEN) [VT-D]iommu.c:694: iommu_fault_status: Primary Pending Fault (XEN) [VT-D]iommu.c:676: iommu_fault:DMA Write: 0:2.0 addr 400000000 REASON 5 iommu->reg = ffff828bfff56000 (XEN) print_vtd_entries: iommu = ffff83007d54a370 bdf = 0:2:0 gmfn = 400000 (XEN) root_entry = ffff83007d543000 (XEN) root_entry[0] = 795c6001 (XEN) context = ffff8300795c6000 (XEN) context[10] = 101_7d0ec001 (XEN) l3 = ffff83007d0ec000 (XEN) l3_index = 10 (XEN) l3[10] = 0 (XEN) l3[10] not present (XEN) *** LOADING DOMAIN 0 *** (XEN) Xen kernel: 64-bit, lsb, c...
2010 Oct 28
0
HVM + IGD Graphics + 4GB RAM = Soft Lockup
...mmu_fault_status: Primary Pending Fault (XEN) [VT-D]iommu.c:823: DMAR:[DMA Write] Request device [00:02.0] fault addr bf4aa000, iommu reg = ffff82c3fff56000 (XEN) DMAR:[fault reason 05h] PTE Write access is not set (XEN) print_vtd_entries: iommu = ffff830237cf88b0 bdf = 0:2.0 gmfn = bf4aa (XEN) root_entry = ffff830237ce4000 (XEN) root_entry[0] = 6f33001 (XEN) context = ffff830006f33000 (XEN) context[10] = 101_2259ec001 (XEN) l3 = ffff8302259ec000 (XEN) l3_index = 2 (XEN) l3[2] = 2259e9003 (XEN) l2 = ffff8302259e9000 (XEN) l2_index = 1fa (XEN) l2[1fa] = 2259dc003 (...
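The "fault reason 05h" line is worth decoding. In a VT-d second-level PTE, bit 0 grants Read and bit 1 grants Write, so reason 05h means a DMA write hit an entry whose Write bit is clear. A small illustration (macro and function names are mine, not the driver's):

#include <stdint.h>
#include <stdbool.h>

#define VTD_PTE_READ   (1ULL << 0)   /* bit 0: DMA reads allowed  */
#define VTD_PTE_WRITE  (1ULL << 1)   /* bit 1: DMA writes allowed */

static bool dma_write_allowed(uint64_t pte)
{
    return (pte & VTD_PTE_WRITE) != 0;
}

/* In the trace above, l2[1fa] = 2259dc003 ends in binary ...011, so Read
 * and Write are both set there; the entry that actually lacks Write must
 * sit at a deeper level, past the point where the excerpt is cut off. */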
2008 Nov 18
6
[PATCH] fix memory allocation from NUMA node for VT-d.
Memory related to a guest domain should be allocated from the NUMA node on which the guest runs, because access latency within the same NUMA node is lower than across nodes. This patch fixes memory allocation for the Address Translation Structures of VT-d. VT-d uses two types of structures for DMA address translation: one is the Device Assignment Structure; the other is the Address Translation
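The idea is simply to pass the guest's home node down into the allocator used for those translation structures. A minimal Linux-flavored sketch of the pattern (the actual patch targets Xen's VT-d allocator, so the function below is illustrative, not the patch itself):

#include <linux/gfp.h>
#include <linux/mm.h>

/* Allocate one zeroed page on the node where the guest runs, so the
 * IOMMU's table walks stay local to that node's memory. */
static void *alloc_pgtable_page_on_node(int node)
{
    struct page *pg = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);

    return pg ? page_address(pg) : NULL;
}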
2013 May 01
6
Bug#706543: xen-hypervisor-4.1-amd64: HVM PCI Passthrough not working any more after last upgrade
Package: xen-hypervisor-4.1-amd64 Version: 4.1.4-3 Severity: grave Justification: causes non-serious data loss -- System Information: Debian Release: 7.0 APT prefers testing APT policy: (500, 'testing') Architecture: amd64 (x86_64) Kernel: Linux 3.2.0-4-amd64 (SMP w/4 CPU cores) Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8) Shell: /bin/sh linked to /bin/dash
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
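The mechanism the series extends is the kernel's standard memcg charging flag: allocating with GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT) bills the memory to the calling task's cgroup. A hedged sketch, with an illustrative function name:

#include <linux/slab.h>

/* Charged to the caller's memory cgroup: a container that maps a large
 * IOVA range pays for its own IOPTEs rather than hiding the cost in
 * unaccounted kernel memory. */
static void *alloc_iopte_table(size_t size)
{
    return kzalloc(size, GFP_KERNEL_ACCOUNT);
}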
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2010 Aug 15
24
Xen patches merged to upstream Linux 2.6.36, plans for 2.6.37?
Hello, it looks like upstream linux-2.6.git contains at least the following Xen-related new features for Linux 2.6.36: - Xen-SWIOTLB support (required for Xen PCI passthrough and dom0) - Xen PV-on-HVM drivers - Xen VBD online dynamic resize of guest disks (xvd*) Congratulations! What are the plans for the 2.6.37 merge window? I believe at least: - Xen PCI frontend. Others? I'm going to