search for: romley

Displaying 8 results from an estimated 8 matches for "romley".

2011 Feb 11
4
Xen hypervisor failed to startup when booting CPUs
...N) 00000000fed1c000 - 00000000fed20000 (reserved) (XEN) 00000000fee00000 - 00000000fee01000 (reserved) (XEN) 00000000ffc00000 - 0000000100000000 (reserved) (XEN) 0000000100000000 - 0000000840000000 (usable) (XEN) ACPI: RSDP 000F0410, 0024 (r2 INTEL) (XEN) ACPI: XSDT BEEC8E18, 008C (r1 INTEL ROMLEY 6222004 INTL 20090903) (XEN) ACPI: FACP BEEC7D98, 00F4 (r4 INTEL ROMLEY 6222004 INTL 20090903) (XEN) ACPI: DSDT BEEAF018, 1660E (r2 INTEL ROMLEY 27 INTL 20100331) (XEN) ACPI: FACS BEEC8D40, 0040 (XEN) ACPI: APIC BEEC6718, 066A (r3 INTEL ROMLEY 6222004 INTL 20090903) (XEN) ACPI: SP...
2012 Jun 24
1
Problems with Powerware 5115 on Patsburg USB
I am seeing some problems when using the Powerware 5115 UPS when connected to the Patsburg USB controller and would like to know if anyone has a solution. The symptom is that PING messages sent to the bcmxcp_usb driver take as long as 25 seconds to complete. The configuration is a Romley platform (a.k.a. Sandy Bridge + Patsburg) running Debian 6 with a 2.6.39 kernel and NUT 2.6.4, in which the Powerware is connected via USB 2.0 to the Patsburg. Of note is that the following messages are logged in syslog frequently. 2012-06-21T09:03:10.754413+00:00 (none) kernel: [ 8659.067243] usb 2...
2013 Mar 12
14
vpmu=1 and running 'perf top' within a PVHVM guest eventually hangs dom0 and hypervisor has stuck vCPUS. Romley-EP (model=45, stepping=2)
This issue I am encountering seems to only happen on multi-socket machines. It also does not help that the only multi-socket box I have is a Romley-EP (so two-socket SandyBridge CPUs). The other SandyBridge boxes I have (one socket) are not showing this. Granted, they are also a different model (42). The problem is that when I run 'perf top' within an SMP PVHVM guest, after a couple of seconds or minutes the guest hang...
2012 Jul 04
13
[PATCH 0/6] tcm_vhost/virtio-scsi WIP code for-3.6
From: Nicholas Bellinger <nab at linux-iscsi.org> Hi folks, This series contains patches required to update tcm_vhost <-> virtio-scsi connected hosts <-> guests to run on v3.5-rc2 mainline code. This series is available on top of target-pending/auto-next here: git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git
2013 Jun 04
12
[PATCH 0/4] XSA-52..54 follow-up
The first patch really isn't as much of a follow-up as what triggered the security issues to be noticed in the first place. 1: x86: preserve FPU selectors for 32-bit guest code 2: x86: fix XCR0 handling 3: x86/xsave: adjust state management 4: x86/fxsave: bring in line with recent xsave adjustments The first two I would see as candidates for 4.3 (as well as subsequent
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
...CSI ports on bare-metal produces ~1M random IOPs with 12x LUNs + numjobs=32. At numjobs=16 here with vhost the 16x LUN configuration ends up being in the range of ~310K IOPs for the current sweet spot. Here is a more detailed breakdown of the test setup: - host hardware: *) Dual Xeon-E5-2687W (Romley-EP) 3.10 GHz w/ 32x threads + 32 GB of DDR3 1600 MHz memory - host kernel: *) Using 3.6-rc0 from target-pending/for-linus *) qemu vhost-scsi from nab's qemu-kvm.git/vhost-scsi on k.o *) Set QEMU vCPU process affinity to dedicated cpus based on 'info cpus' (as recommen...
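The affinity step mentioned in this setup (pinning QEMU vCPU threads to dedicated host CPUs using the thread IDs reported by the QEMU monitor command 'info cpus') can be sketched as below. The sample monitor output, thread IDs, and the sequential CPU numbering are illustrative assumptions, not values from the original report; the script only echoes the taskset commands rather than executing them.

```shell
# Sketch: derive taskset pinning commands from 'info cpus' output.
# Sample monitor output (format varies by QEMU version; this is an
# assumed example, not taken from the report above):
info_cpus_output='* CPU #0: pc=0x0 thread_id=4321
  CPU #1: pc=0x0 thread_id=4322'

host_cpu=0
echo "$info_cpus_output" | grep -o 'thread_id=[0-9]*' | cut -d= -f2 |
while read -r tid; do
    # taskset -pc pins an existing thread (by TID) to the given host CPU;
    # here we only print the command each vCPU thread would get.
    echo "taskset -pc $host_cpu $tid"
    host_cpu=$((host_cpu + 1))
done
```

Echoing the commands first makes it easy to review the mapping before applying it on a live guest.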