Displaying 20 results from an estimated 26 matches for "irqbalancing".
2007 Sep 11
0
irqbalanced on SMP dom0 ?
Hi listmembers,
not a really urgent question, but I'm just curious about it:
Is it advised to run irqbalanced on dom0 when running
domUs pinned to particular cores?
As an example, I've got a dual quad-core Xen system running
with
domU pinned to cores 1-3 (CPU#0)
domU pinned to cores 4-5 (CPU#1)
domU pinned to cores 6-7 (CPU#1)
so dom0 should have 100% time on
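The pinning layout described above can be expressed with Xen's toolstack. Below is a minimal sketch that only prints the `xl vcpu-pin` commands (the `xl` toolstack postdates this 2007 post, whose era used `xm vcpu-pin` with the same argument order); the domain names `domU1`..`domU3` are assumptions, so substitute the names shown by `xl list` and run the printed commands as root in dom0:

```shell
# Sketch only: print the pinning commands for the layout in the post.
# Domain names (domU1..domU3, Domain-0) are hypothetical placeholders.
gen_pin() {
    # $1 = domain name, $2 = CPU list; pin all of the domain's vCPUs there
    echo "xl vcpu-pin $1 all $2"
}
gen_pin domU1 1-3
gen_pin domU2 4-5
gen_pin domU3 6-7
gen_pin Domain-0 0
```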
2007 Jul 16
2
irqbalance?
...turn off irqbalance in xen/arch/x86/irq.c, does that actually stop Xen from balancing IRQs across different physical CPUs, or will it override the setting if there are too many interrupts and one of the CPUs is overloaded?
Example: I have 4 CPUs and I have configured irqbalance=off, so there is no irqbalancing done by Xen.
Now if I have affinitized all my physical interrupts to one CPU, say cpu1, all the interrupts should be handled by cpu1, and so should all the softirqs generated.
Now what happens if one of the other CPUs is lightly loaded, will some of the softirqs be queued against the other CPUs or...
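The "affinitized to one CPU" setup the poster describes is done on Linux through `/proc/irq/<N>/smp_affinity`, which takes a hexadecimal CPU bitmask. A minimal sketch that only computes the mask (writing it requires root, and the IRQ number 24 in the comment is a made-up example):

```shell
# Build the smp_affinity hex bitmask for a single CPU index.
# To apply (as root; IRQ number 24 is hypothetical):
#   echo "$(cpu_mask 1)" > /proc/irq/24/smp_affinity
cpu_mask() {
    # $1 = CPU index; bit N set in the mask selects CPU N
    printf '%x\n' $((1 << $1))
}
cpu_mask 0   # CPU0 -> 1
cpu_mask 1   # CPU1 -> 2
cpu_mask 3   # CPU3 -> 8
```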
2012 Jul 11
12
99% iowait on one core in 8 core processor
Hi All,
We have a xen server and using 8 core processor.
I can see that there is 99% iowait on only core 0.
02:28:49 AM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
02:28:54 AM all 0.00 0.00 0.00 12.65 0.00 0.02 2.24 85.08 1359.88
02:28:54 AM 0 0.00 0.00 0.00 96.21 0.00 0.20 3.19 0.40 847.11
02:28:54 AM
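A quick way to confirm that iowait is confined to a single core, as in the sar output above, is to read the raw per-CPU counters from `/proc/stat`; the sixth field of each `cpuN` line is accumulated iowait ticks. A Linux-only sketch:

```shell
# Print raw iowait ticks per CPU from /proc/stat (Linux only).
# Field layout: cpuN user nice system idle iowait irq softirq ...
# One core accumulating ticks while the others stay flat matches the
# symptom in the report above.
awk '/^cpu[0-9]/ { printf "%s iowait=%s\n", $1, $6 }' /proc/stat
```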
2014 Aug 31
3
Bug#577788: dom0 kernels should suggest irqbalance
(copying debian-kernel for reasons which will hopefully become obvious)
On Mon, 8 Jul 2013 18:10:58 +0200 Moritz Mühlenhoff <jmm at inutil.org> wrote:
> In current Debian kernel there's no special Xen dom0 kernel image and depending
> on irqbalance in the kernel package would be overkill.
Would it? I thought irqbalance is actually required even for native with
2014 Sep 03
0
Bug#577788: dom0 kernels should suggest irqbalance
On Sun, 2014-08-31 at 03:10 +0100, Ian Campbell wrote:
> (copying debian-kernel for reasons which will hopefully become obvious)
>
> On Mon, 8 Jul 2013 18:10:58 +0200 Moritz Mühlenhoff <jmm at inutil.org> wrote:
> > In current Debian kernel there's no special Xen dom0 kernel image and depending
> > on irqbalance in the kernel package would be
2010 Nov 05
2
i/o scheduler deadlocks with loopback devices
This was an email I sent to xen-devel a while ago without getting a
response. I'm reposting it here in case someone knows more.
Hello all,
I'm able to consistently reproduce lockups in my domU under heavy I/O,
with the following error:
[36841.420662] INFO: task rsyslogd:15014
blocked for more than 120 seconds. [36841.420843] "echo 0>
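The truncated kernel message is the hung-task watchdog's standard hint; the timeout it refers to is the `kernel.hung_task_timeout_secs` sysctl (seconds; 0 disables the warning but does not fix the underlying I/O stall). A Linux-only sketch to inspect it:

```shell
# Read the hung-task watchdog timeout. 0 means the "blocked for more
# than N seconds" warning is disabled; the default is typically 120.
# Changing it only silences the report, it does not fix the stall.
if [ -r /proc/sys/kernel/hung_task_timeout_secs ]; then
    cat /proc/sys/kernel/hung_task_timeout_secs
else
    echo "hung-task watchdog not built into this kernel"
fi
```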
2014 May 14
2
2 PRI Card - Interrupt Problem
Hello All,
I have 2 Digium cards configured on a single machine, which can't share
interrupts across all CPUs, and sometimes Asterisk reaches 100% CPU usage.
Here are the system details and the /proc/interrupts output.
OS: CentOS 6.4
Kernel: 2.6.32-431.11.2.el6.x86_64
DAHDI Version: 2.7.0.2 Echo Canceller: HWEC
Asterisk Version: 1.8.13.0
Output: /proc/interrupts
cat /proc/interrupts
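Whether interrupt load really is stuck on one CPU can be read straight from `/proc/interrupts`, whose columns are per-CPU counts. A Linux-only sketch that totals the counts per CPU over all numbered IRQ lines (the Digium driver's name in that file varies per card, so this stays device-agnostic):

```shell
# Total hardware interrupts serviced per CPU, summed over the numbered
# IRQ lines of /proc/interrupts. Everything landing on CPU0 would match
# the "can't share interrupts across all CPUs" symptom above.
awk 'NR == 1 { n = NF; next }                     # header row: CPU0 CPU1 ...
     /^ *[0-9]+:/ { for (i = 2; i <= n + 1; i++) tot[i] += $i }
     END { for (i = 2; i <= n + 1; i++) printf "CPU%d %d\n", i - 2, tot[i] }' \
    /proc/interrupts
```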
2008 Jul 31
0
[Xen-devel] State of Xen in upstream Linux
----- Forwarded message from Jeremy Fitzhardinge <jeremy at goop.org> -----
From: Jeremy Fitzhardinge <jeremy at goop.org>
To: Xen-devel <xen-devel at lists.xensource.com>,
xen-users at lists.xensource.com,
Virtualization Mailing List <virtualization at lists.osdl.org>
Cc:
Date: Wed, 30 Jul 2008 17:51:37 -0700
Subject: [Xen-devel] State of Xen in upstream Linux
Well,
2008 Jul 31
6
State of Xen in upstream Linux
Well, the mainline kernel just hit 2.6.27-rc1, so it's time for an
update about what's new with Xen. I'm trying to aim this at both the
user and developer audiences, so bear with me if I seem to be waffling
about something irrelevant.
2.6.26 was mostly a bugfix update compared with 2.6.25, with a few small
issues fixed up. Feature-wise, it supports 32-bit domU with the core
devices
2013 Feb 12
6
[PATCH v3 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends). The patches build on top of the new virtio APIs
at http://permalink.gmane.org/gmane.linux.kernel.virtualization/18431;
the new API simplifies the locking of the virtio-scsi driver nicely,
thus it makes sense to require them as a prerequisite.
2013 Mar 19
6
[PATCH V5 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V5: improving the grammar of 1/5 (Paolo)
move the dropping of sg_elems to 'virtio-scsi: use
2013 Mar 11
7
[PATCH V4 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V4: rebase on virtio ring rework patches (rusty's pending-rebases branch)
V3 can be found
2013 Mar 20
7
[PATCH V6 0/5] virtio-scsi multiqueue
This series implements virtio-scsi queue steering, which gives
performance improvements of up to 50% (measured both with QEMU and
tcm_vhost backends).
This version is rebased on Rusty's virtio ring rework patches, which
have already gone into virtio-next today.
We hope this can go into virtio-next together with the virtio ring
rework patches.
V6: rework "redo allocation of target data"