search for: irqbalanced

Displaying 20 results from an estimated 26 matches for "irqbalanced".

Did you mean: irqbalance
2007 Sep 11
0
irqbalanced on SMP dom0 ?
Hi list members, not a really urgent question, but I'm just curious about it: is it advisable to run irqbalanced on dom0 when the domUs are pinned to particular cores? As an example, I've got a dual quad-core Xen system running with domU pinned to cores 1-3 (CPU#0), domU pinned to cores 4-5 (CPU#1), domU pinned to cores 6-7 (CPU#1), so dom0 should have 100% time on core 0 (CPU#0). When loo...
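
For context, pinning of this kind is done through the Xen toolstack. The sketch below shows roughly how such a layout could be set up and how irqbalanced is usually stopped alongside it; the domain names (guest1-guest3) and the xm-era commands are assumptions, not taken from the thread.

  # Pin each guest's vcpus to its dedicated cores (domain names hypothetical)
  xm vcpu-pin guest1 all 1-3    # domU on cores 1-3
  xm vcpu-pin guest2 all 4-5    # domU on cores 4-5
  xm vcpu-pin guest3 all 6-7    # domU on cores 6-7
  xm vcpu-pin Domain-0 all 0    # keep dom0 on core 0

  # With dom0 confined to a single core there is nothing for irqbalanced
  # to spread out, so it is commonly just stopped:
  /etc/init.d/irqbalance stop
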
2007 Jul 16
2
irqbalance?
Hi All, If I turn off irqbalance in xen/arch/x86/irq.c, does that actually stop Xen from balancing IRQs across different physical CPUs, or will that override the setting if there are too many interrupts and one of the CPUs is overloaded? Example: I have 4 CPUs and I have configured irqbalance=off, so there is no IRQ balancing done by Xen. Now if I have affinitized all my physical interrupts to one
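
To illustrate the manual affinitization the poster describes: on Linux, each IRQ's CPU mask can be read and written through /proc regardless of any balancing daemon. A minimal sketch; the IRQ number (24) and the masks are hypothetical.

  # Show where interrupts are currently being delivered
  cat /proc/interrupts

  # Pin IRQ 24 (hypothetical) to CPU0 only; the value is a hex CPU bitmask
  echo 1 > /proc/irq/24/smp_affinity

  # Or allow CPUs 0-3 instead (mask 0xf)
  echo f > /proc/irq/24/smp_affinity
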
2012 Jul 11
12
99% iowait on one core in 8 core processor
Hi All, We have a Xen server using an 8-core processor, and I can see 99% iowait on core 0 only:

02:28:49 AM  CPU   %user   %nice    %sys  %iowait    %irq   %soft  %steal   %idle   intr/s
02:28:54 AM  all    0.00    0.00    0.00    12.65    0.00    0.02    2.24   85.08  1359.88
02:28:54 AM    0    0.00    0.00    0.00    96.21    0.00    0.20    3.19    0.40   847.11
02:28:54 AM
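
Output like the above comes from sysstat's per-CPU mode; a minimal sketch of how to gather the same numbers and cross-check which CPU is servicing the interrupts:

  # Per-CPU utilization every 5 seconds (requires the sysstat package)
  mpstat -P ALL 5

  # See which IRQs are landing on core 0
  cat /proc/interrupts
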
2014 Aug 31
3
Bug#577788: dom0 kernels should suggest irqbalance
...or native with modern kernels, since the kernel doesn't do any balancing by itself any more (it used to). Looking at my laptop, for instance, I see that all interrupts are going to CPU0 out of the 4 processors. On the other hand, my workstation does seem to have balanced IRQs despite having no irqbalanced running, so I don't know. I reckon the kernel probably should recommend irqbalance these days, but in any case there is no reason for Xen to do anything different (since IRQ balancing should work as on native). Ian.
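
Whether IRQs are being balanced is straightforward to check from /proc; a quick sketch of the kind of inspection described above:

  # Watch the per-CPU interrupt counters; if only the CPU0 column grows,
  # nothing is balancing the IRQs
  watch -n1 cat /proc/interrupts

  # Check whether a userspace irqbalance daemon is running
  pgrep -l irqbalance
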
2014 Sep 03
0
Bug#577788: dom0 kernels should suggest irqbalance
...ful if there is a lot of work done in interrupt or softirq context (and there are multiple processors). > Looking at my laptop for instance I see that all interrupts are going to > CPU0 out of the 4 processors. On the other hand my workstation does seem > to have balanced IRQs despite having no irqbalanced running, so I don't > know. > > I reckon the kernel probably should recommend irqbalance these days, but > in any case there is no reason for Xen to do anything different (since > IRQ balancing should work as on native). At least kernels that support SMP could recommend it. B...
2010 Nov 05
2
i/o scheduler deadlocks with loopback devices
...1/4614107 Jeremy Fitzhardinge replied to that thread, indicating that his "xen: use percpu interrupts for IPIs and VIRQs" and "xen: handle events as edge-triggered" patches should fix the issue. These were introduced in 2.6.36-rc3, I believe, and the issue persists. Disabling irqbalanced in dom0, as he suggested as a workaround, has no effect. I've also tried changing the scheduler and reducing the number of vcpus from 4 to 1, which also had no effect. Regards, Nathan Gamber _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensourc...
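
The workarounds tried in that report map to simple commands; a sketch, where the block device xvda and the domain name "guest" are hypothetical:

  # Disable irqbalanced in dom0 (init-script name varies by distro)
  /etc/init.d/irqbalance stop

  # Switch the I/O scheduler for one block device
  echo noop > /sys/block/xvda/queue/scheduler

  # Reduce the guest's vcpus from 4 to 1
  xm vcpu-set guest 1
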
2014 May 14
2
2 PRI Card - Interrupt Problem
Hello All, I have 2 Digium cards configured on a single machine; they can't share interrupts across all CPUs, and sometimes Asterisk reaches 100% CPU usage. Here are the system details and the /proc/interrupts output. OS: CentOS 6.4 Kernel: 2.6.32-431.11.2.el6.x86_64 DAHDI Version: 2.7.0.2 Echo Canceller: HWEC Asterisk Version: 1.8.13.0 Output: /proc/interrupts cat /proc/interrupts
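
A common mitigation here is to pin each card's IRQ to a different CPU by hand; a sketch, assuming the two cards appear on hypothetical IRQs 16 and 17 (the actual line names in /proc/interrupts depend on the DAHDI driver in use):

  # Find the IRQ lines used by the telephony cards
  grep -i dahdi /proc/interrupts

  # Put one card on CPU1 and the other on CPU2 (hex CPU bitmasks)
  echo 2 > /proc/irq/16/smp_affinity
  echo 4 > /proc/irq/17/smp_affinity
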
2008 Jul 31
0
[Xen-devel] State of Xen in upstream Linux
...'s losing interrupts. It's probably dropping interrupts if you migrate an irq between vcpus while an event is pending. Shouldn't be too hard to fix. (In the meantime, the workaround is to make sure that you don't enable in-kernel irq balancing, and you don't run irqbalanced.) block device hotplug Hotplugging devices should work already, but I haven't really tested it. Need to make sure that both the in-kernel driver stuff works properly, and that udev events are raised properly, scripts run, device nodes added - and conversely for unplug. Also,...
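
The two halves of that workaround are a compile-time option and a userspace service; a sketch for kernels of that era (option and service names as they existed then, paths assumed):

  # In-kernel balancing was the x86 CONFIG_IRQBALANCE option (since removed);
  # confirm the running kernel was built without it
  grep IRQBALANCE /boot/config-$(uname -r)

  # Stop the userspace daemon and keep it from starting at boot
  /etc/init.d/irqbalance stop
  chkconfig irqbalance off
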
2008 Jul 31
6
State of Xen in upstream Linux
...'s losing interrupts. It's probably dropping interrupts if you migrate an irq between vcpus while an event is pending. Shouldn't be too hard to fix. (In the meantime, the workaround is to make sure that you don't enable in-kernel irq balancing, and you don't run irqbalanced.) block device hotplug Hotplugging devices should work already, but I haven't really tested it. Need to make sure that both the in-kernel driver stuff works properly, and that udev events are raised properly, scripts run, device nodes added - and conversely for unplug. Also,...
2013 Feb 12
6
[PATCH v3 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives performance improvements of up to 50% (measured both with QEMU and tcm_vhost backends). The patches build on top of the new virtio APIs at http://permalink.gmane.org/gmane.linux.kernel.virtualization/18431; the new API simplifies the locking of the virtio-scsi driver nicely, thus it makes sense to require them as a prerequisite.
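
On the QEMU side, the number of virtio-scsi virtqueues is a device property; a minimal sketch of requesting a multiqueue controller (the disk image path and queue count are illustrative):

  # Ask for 4 request queues on the virtio-scsi controller
  qemu-system-x86_64 -machine accel=kvm -smp 4 \
      -device virtio-scsi-pci,num_queues=4 \
      -drive if=none,id=disk0,file=disk.img \
      -device scsi-hd,drive=disk0
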
2013 Mar 19
6
[PATCH V5 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives performance improvements of up to 50% (measured both with QEMU and tcm_vhost backends). This version is rebased on Rusty's virtio ring rework patches. We hope this can go into virtio-next together with the virtio ring rework patches. V5: improve the grammar of 1/5 (Paolo); move the dropping of sg_elems to 'virtio-scsi: use
2013 Mar 11
7
[PATCH V4 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives performance improvements of up to 50% (measured both with QEMU and tcm_vhost backends). This version is rebased on Rusty's virtio ring rework patches. We hope this can go into virtio-next together with the virtio ring rework patches. V4: rebase on virtio ring rework patches (Rusty's pending-rebases branch); V3 can be found
2013 Mar 20
7
[PATCH V6 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives performance improvements of up to 50% (measured both with QEMU and tcm_vhost backends). This version is rebased on Rusty's virtio ring rework patches, which have already gone into virtio-next today. We hope this can go into virtio-next together with the virtio ring rework patches. V6: rework "redo allocation of target data"