search for: cpu_idlers

Displaying 20 results from an estimated 160 matches for "cpu_idlers".

2012 Mar 01
3
[PATCH v2] x86: Use deep C states for off-lined CPUs
# HG changeset patch
# User Boris Ostrovsky <boris.ostrovsky@amd.com>
# Date 1330642361 -3600
# Node ID 99df5c6b2964ceaa73651d7bc02fb1ae820f7691
# Parent a7bacdc5449a2f7bb9c35b2a1334b463fe9f29a9
x86: Use deep C states for off-lined CPUs
Currently when a core is taken off-line it is placed in C1 state (unless MONITOR/MWAIT is used). This patch allows a core to go to deeper C states
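The mechanism behind the patch is MONITOR/MWAIT with a C-state hint. A minimal sketch of the idea, where mwait_hint_for_deep_cstate() and dead_idle_flag are assumed placeholders, not the actual patch (which derives the hint from the ACPI _CST data):

    /* Sketch: park an off-lined core in a deep C state via MONITOR/MWAIT. */
    static unsigned int dead_idle_flag;   /* assumed dummy monitor target */

    static void dead_idle_mwait(void)
    {
        unsigned int hint = mwait_hint_for_deep_cstate();  /* assumed helper */

        for ( ; ; )
        {
            /* Arm the monitor, then request the deep state; a write to the
             * monitored line (or an interrupt) wakes the core. */
            asm volatile ( "monitor" :: "a" (&dead_idle_flag), "c" (0), "d" (0) );
            asm volatile ( "mwait" :: "a" (hint), "c" (0) );
        }
    }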
2013 Jul 15
1
[PATCH] xen/cpuidle: Reduce logging level for unknown apic_ids
Dom0 uses this hypercall to pass ACPI information to Xen. It is not uncommon for more CPUs to be listed in the ACPI tables than are present on the system, particularly on systems with a common BIOS for 2- and 4-socket server variants. As Dom0 does not control the number of entries in the ACPI tables, and is required to pass everything it finds to Xen, reduce the ERR to an INFO.
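The shape of the change is just a printk severity drop; a sketch under assumed names (the function and message text here are illustrative, not the exact Xen code):

    /* Illustrative only: lower the severity for an APIC ID with no CPU. */
    static void report_unknown_apic_id(uint32_t apic_id)
    {
        /* More ACPI entries than present CPUs is normal with a shared BIOS,
         * and Dom0 must forward everything it finds, so this is
         * informational, not an error. */
        printk(XENLOG_INFO "cpuidle: no present CPU for APIC ID %#x\n",
               apic_id);
    }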
2010 Apr 23
0
vmcore on 5.4
Information: 5.4 kernel (2.6.18-164.el5). I have a vmcore (from kdump); if the developers are interested, let me know where to upload the vmcore file. I used the crash command to do a backtrace. I managed to get machines with later 5.4 and 5.5 kernels to panic the same way. Broadcom and Intel NICs panic the same way. This is an NFS client where the NFS server is restarting several times; NFSv3, mount
2006 Dec 08
2
Lots of "swapper: page allocation failure" and other memory related messages - 2.6.16-xen0
(please keep me on Cc when replying) I have a server running Xen that regularly spews the following. The box seems to survive fine regardless - just thought I'd let everyone know. Dec 8 12:19:26 server kernel: 0x47/0x7a Dec 8 12:19:26 server kernel: [alloc_skb_from_cache+70/243] alloc_skb_from_cache+0x46/0xf3 Dec 8 12:19:26 server kernel: [__dev_alloc_skb+70/92]
2011 Feb 23
0
[PATCH] Fixing mwait usage when doing cpu offline
Hi Keir, in debugging the issue "system hang when doing cpu offline", I identified a situation that could cause a deadlock. The scenario is: mwait_idle_with_hint inside play_dead accesses a per-CPU variable, which causes a #PF. The #PF handler uses printk, which schedules a tasklet. Scheduling a tasklet needs per-CPU variables again, so there will be another #PF.
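The chain is: per-CPU access -> #PF -> printk -> tasklet -> per-CPU access -> #PF again. A sketch of the kind of fix this implies (mwait_hint and dead_flag are placeholder per-CPU fields): snapshot everything into locals while the CPU is still fully up, so the dead loop never touches per-CPU data:

    /* Sketch: read all per-CPU state into locals before the dead loop,
     * so nothing in the mwait path can fault. */
    static void play_dead_sketch(void)
    {
        unsigned int hint = this_cpu(mwait_hint);   /* safe: CPU still up */
        void *monitor_addr = &this_cpu(dead_flag);

        local_irq_disable();
        for ( ; ; )
        {
            asm volatile ( "monitor" :: "a" (monitor_addr), "c" (0), "d" (0) );
            asm volatile ( "mwait" :: "a" (hint), "c" (0) );
        }
    }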
2005 Nov 30
2
kernel panic this morning
When I came into work this morning the console had the following: bio_endio __end_that_request_first scsi_end_request scsi_io_completion scsi_finish_command scsi_softirq __do_softirq do_softirq do_IRQ common_interrupt default_idle cpu_idle start_kernel Panic. I am running software RAID-1 on this machine. I had 2 IDE disks and the machine went down once in a while with similar messages. I
2011 Feb 11
2
Not able to capture detailed CPU information of the guest machine using Libvirt API.
Hi, I have two KVM guests on an Ubuntu host machine. I am using the Python binding of the libvirt API to query the hypervisor and capture CPU and memory related information about the guest machines. I need to capture detailed CPU information such as cpu_aidle, cpu_idle, cpu_speed, cpu_wio, and memory information such as mem_cached, mem_buffers, mem_free, etc. of the guest machines. How could I get these
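The Python bindings mirror libvirt's C API. A minimal C sketch of the part libvirt can answer directly (overall CPU time and memory via virDomainGetInfo); "guest1" is an assumed domain name, and note that per-state breakdowns like cpu_idle/cpu_wio are Ganglia-style metrics measured inside the guest, not exposed by the hypervisor-side API:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;

        virDomainPtr dom = virDomainLookupByName(conn, "guest1"); /* assumed */
        if (dom) {
            virDomainInfo info;
            if (virDomainGetInfo(dom, &info) == 0)
                printf("vcpus=%hu cpuTime=%llu ns mem=%lu KiB maxMem=%lu KiB\n",
                       info.nrVirtCpu, info.cpuTime, info.memory, info.maxMem);
            virDomainFree(dom);
        }
        virConnectClose(conn);
        return 0;
    }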
2007 Jun 13
2
HTB deadlock
Greetings, I've been experiencing problems with HTB where the whole machine locks up. This usually happens when the whole qdisc is being removed, and occasionally when a leaf is being removed. The common factor is that it always happens when some sort of removal is in progress. The console output I have captured is at the end of this message. The same behavior exists from vanilla 2.6.19.7 and above.
2013 Nov 11
1
[PATCH] x86/idle: reduce contention on ACPI register accesses
Other than when they're located in I/O port space, accessing them when in MMIO space (currently) implies usage of some sort of global lock: in -unstable this would be due to the use of vmap(); in older trees the necessary locking was introduced by 2ee9cbf9 ("ACPI: fix acpi_os_map_memory()"). This contention was observed to result in Dom0 kernel soft lockups during the loading of
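A sketch of the general idea (illustrative, not the actual patch): map the MMIO-based ACPI register once and cache the pointer, so idle-path reads take a plain load instead of the global mapping lock; a real version would also need a once-only guard against concurrent setup:

    static void *cx_reg_va;   /* assumed per-register cache */

    static uint32_t read_cx_reg(paddr_t maddr)
    {
        if ( !cx_reg_va )
            cx_reg_va = ioremap(maddr, 4);   /* slow path, taken once */
        return readl(cx_reg_va);             /* fast path: no global lock */
    }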
2012 Nov 02
4
[PATCH] ACPI/cpuidle: remove unused "power" field from Cx state data
It has never been used for anything, and Linux 3.7 doesn't propagate this information anymore. Signed-off-by: Jan Beulich <jbeulich@suse.com> --- Konrad, on the pv-ops side it may be better to pass zero rather than leaving the field completely uninitialized. --- a/xen/arch/x86/acpi/cpu_idle.c +++ b/xen/arch/x86/acpi/cpu_idle.c @@ -935,7 +935,6 @@ static void set_cx( }
2011 Sep 01
3
DOM0 Hang on a large box....
Hi, I'm looking at a system hang on a large box: 160 CPUs, 2TB. Dom0 is booted with 160 vCPUs (don't ask me why :)), and an HVM guest is started with over 1.5T RAM and 128 vCPUs. The system hangs without much activity after a couple of hours. Xen 4.0.2 and a 2.6.32-based 64-bit dom0. During the hang I discovered: most of dom0's vCPUs are in double_lock_balance, spinning on one of the locks:
2012 Dec 13
7
HVM bug: system crashes after offline online a vcpu
Hi Konrad, I encountered a bug when trying to take a CPU offline and then online it again in HVM. As I'm not very familiar with the HVM internals I cannot come up with a quick fix. The HVM DomU is configured with 4 vCPUs. After booting to a command prompt, I do the following operations: # echo 0 > /sys/devices/system/cpu/cpu3/online # echo 1 > /sys/devices/system/cpu/cpu3/online With
2010 Mar 09
4
"monitor"-ed address and IPI reduction
What is the point of specifying "current" as the address to monitor? The memory location of interest really is irq_stat[cpu].__softirq_pending, and if that were used it would then also be possible to actually avoid sending IPIs when monitor/mwait are in use, as is being done on Linux. Jan
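A sketch of the suggestion (illustrative): monitor this CPU's own __softirq_pending word, so that raising a softirq from another CPU wakes the waiter through the monitored write and no IPI is needed:

    static void mwait_on_softirq_pending(unsigned int cpu, unsigned int hint)
    {
        void *addr = &irq_stat[cpu].__softirq_pending;

        asm volatile ( "monitor" :: "a" (addr), "c" (0), "d" (0) );
        if ( !softirq_pending(cpu) )   /* re-check after arming the monitor */
            asm volatile ( "mwait" :: "a" (hint), "c" (0) );
    }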
2011 Oct 25
5
[PATCH] pm : provide CC7/PC2 residency
x86 pm: provide CC7/PC2 residency. Sandy Bridge introduces new MSRs to get CC7/PC2 residency (core C-state 7 / package C-state 2). Print the CC7/PC2 residency on Sandy Bridge platforms. Signed-off-by: Yang Zhang <yang.z.zhang@intel.com> diff -r 662dbf6ee71c tools/libxc/xc_pm.c --- a/tools/libxc/xc_pm.c Mon Oct 24 18:01:07 2011 +0100 +++ b/tools/libxc/xc_pm.c Fri Oct 28
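For reference, a minimal sketch of reading these counters; the MSR indices below match Intel's published numbering but should be treated as assumptions here:

    #define MSR_CORE_C7_RESIDENCY 0x3fe   /* core C7 residency counter */
    #define MSR_PKG_C2_RESIDENCY  0x60d   /* package C2 residency counter */

    static void print_snb_residencies(void)
    {
        uint64_t cc7, pc2;

        rdmsrl(MSR_CORE_C7_RESIDENCY, cc7);
        rdmsrl(MSR_PKG_C2_RESIDENCY, pc2);
        printk("CC7 residency %"PRIu64", PC2 residency %"PRIu64"\n", cc7, pc2);
    }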
2009 Apr 18
2
libata-core kernel errors
This is a repost of sorts, and for that I am sorry; I do not think my original posting's subject was very clear, and I have more data about the problem. I'm experiencing lots of kernel errors when reading from or writing to a disk that is part of an mdadm software RAID-5 array. Since originally detecting this problem, I have isolated it to one disk, but I'm not sure what the cause of the error is. I
2005 Dec 05
11
Xen 3.0 and Hyperthreading an issue?
Just gave 3.0 a spin. I had been running 2.0.7 for the past 3 months or so without problems (aside from intermittent failures during live migration). Anyway, 3.0 seems to have an issue with my machine. It starts up the 4 domains that I've defined (I was running 6 user domains with 2.0.7, but two of those were running 2.4 kernels which I can't seem to build with Xen 3.0 yet, and
2013 Jun 03
0
[PATCH] xen/smp: Fixup NOHZ per cpu data when onlining an offline CPU.
The xen_play_dead is an undead function. When the vCPU is told to go offline it ends up calling xen_play_dead, wherein it calls the VCPUOP_down hypercall which offlines the vCPU. However, when the vCPU is onlined back, it resumes execution right after the VCPUOP_down hypercall. That was OK (albeit the API for play_dead assumes that the CPU stays dead and never returns), but with commit 4b0c0f294 (tick:
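A sketch of the control flow (function names from the Linux Xen code; the fixup detail is simplified): VCPUOP_down effectively "returns" when the vCPU is onlined again, so per-CPU state must be rebuilt before re-entering the idle loop:

    static void xen_play_dead_sketch(void)
    {
        HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
        /* Execution resumes here when the vCPU is brought back online. */
        cpu_bringup();   /* redo what offlining tore down */
        /* After commit 4b0c0f294, the per-CPU NOHZ/tick state must be
         * fixed up here too, before this CPU re-enters the idle loop. */
    }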
2008 Feb 05
1
[PATCH] virtio_net: Fix open <-> interrupt race
I got the following oops during interface ifup. Unfortunately it's not easily reproducible, so I can't say for sure that my fix fixes this problem, but I am confident, and I think it's correct anyway: <2>kernel BUG at /space/kvm/drivers/virtio/virtio_ring.c:234! <4>illegal operation: 0001 [#1] PREEMPT SMP <4>Modules linked in: <4>CPU: 0 Not tainted
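The oops pattern suggests the interrupt path ran before open() finished setting up. A generic sketch of the defensive ordering in a network driver (illustrative, not the actual virtio_net patch; struct my_priv and enable_device_interrupts() are assumptions):

    static int sketch_open(struct net_device *dev)
    {
        struct my_priv *priv = netdev_priv(dev);

        napi_enable(&priv->napi);         /* ready to be polled ...      */
        enable_device_interrupts(priv);   /* ... before the IRQ can fire */

        /* An event may have arrived while interrupts were still masked. */
        if (napi_schedule_prep(&priv->napi))
            __napi_schedule(&priv->napi);
        return 0;
    }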
2009 Nov 08
9
2.6.31 xenified kernel - not ready for production
Hi, I just want to know whether somebody uses the 2.6.31.4 xenified kernel (aka openSUSE) in production. We have been testing it on a new Nehalem Xeon server for a few weeks without any problems. But as soon as we tried it on a production machine - after several production domUs had started - we hit a hard OS failure. We had to switch back to the 2.6.18.8 Xen stock kernel. Peter