Displaying 20 results from an estimated 30 matches for "smp_affin".
2012 Jul 11
12
99% iowait on one core in 8 core processor
Hi All,
We have a Xen server with an 8-core processor.
I can see that there is 99% iowait on core 0 only.
02:28:49 AM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
02:28:54 AM all 0.00 0.00 0.00 12.65 0.00 0.02 2.24 85.08 1359.88
02:28:54 AM 0 0.00 0.00 0.00 96.21 0.00 0.20 3.19 0.40 847.11
02:28:54 AM
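A hedged diagnostic sketch for a case like this one: watch which CPU columns in
/proc/interrupts are incrementing, then check the affinity mask of the busy IRQ
(the IRQ number 48 below is only a placeholder).
watch -d -n 1 cat /proc/interrupts     # highlighted deltas show which CPU services each IRQ
cat /proc/irq/48/smp_affinity          # an all-f mask allows any CPU; a single set bit pins the IRQ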
2006 Mar 28
2
Asterisk & SMP: Is irqbalance Redundant on 2.6 Kernels?
...ere is heavy network activity, so if the IRQs are not
balanced the server will be CPU bound by the lone processor assigned to
handle them.
Initially, I solved this problem by adding the following line to
"/etc/rc.local" (82 is the IRQ of the ethernet device):
echo 0f > /proc/irq/82/smp_affinity
This worked like a charm. The IRQs from the ethernet device were
balanced across the four processors and each of their idle times was
comparable. Unfortunately, when I duplicated this on our backup machine
(adjusting the IRQ accordingly), I discovered that something was
overwriting the value...
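For reference, a minimal sketch of the approach described above plus a check for
the usual culprit that rewrites these masks (an irqbalance daemon); the IRQ number
82 and the 0f mask (CPUs 0-3) follow the post and should be adjusted per machine.
echo 0f > /proc/irq/82/smp_affinity    # allow CPUs 0-3 to service the NIC IRQ
cat /proc/irq/82/smp_affinity          # verify the mask stuck (should read 0f)
pgrep -l irqbalance                    # if irqbalance is running, it may overwrite the mask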
2010 Sep 13
1
irq 58 nobody cared.
...ult_idle+0x29/0x50
[<ffffffff8004923a>] cpu_idle+0x95/0xb8
[<ffffffff8007796f>] start_secondary+0x498/0x4a7
handlers:
[<ffffffff801f74cf>] (usb_hcd_irq+0x0/0x55)
Disabling IRQ #58
Looking at /proc/irq/58, it appears to be USB related:
/proc/irq/58/ehci_hcd:usb2
/proc/irq/58/smp_affinity
cat /proc/irq/58/smp_affinity
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000002
This is an Asus M4N75TD mainboard with 4GB of non-ECC RAM and an AMD
Phenom(tm) II X4 925 processor. It has an ancient ATI Rage PCI
graphics board, booting into ini...
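As a hedged aside, the comma-separated value above is a hexadecimal CPU bitmask
(the low word 00000002 means CPU1). A small sketch to decode it, assuming bash and
at most 64 CPUs; on kernels that provide it, /proc/irq/58/smp_affinity_list reports
the same information directly.
mask=$(tr -d ',' < /proc/irq/58/smp_affinity)   # strip the comma separators
mask=$((16#$mask))                              # hex mask -> integer
cpus=""
for ((cpu=0; mask>0; cpu++, mask>>=1)); do
  (( mask & 1 )) && cpus="$cpus $cpu"
done
echo "IRQ 58 may be delivered to CPUs:$cpus"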
2007 Oct 24
7
Compatibility Issues with dell poweredge 1950 and TE110P card
Has anyone had any compatibility issues with a TE110P card installed on a
Dell Poweredge 1950? I noted the following error on the LCD display of the
Dell Poweredge 1950:
E1711 PCI PErr Slot 1 E171F PCIE Fatal Error B0 D4 F0.
The Dell hardware owner's manual states that it means the system BIOS has
reported a PCI parity error on a component that resides in PCI configuration
space at bus 0,
2013 Feb 26
4
passthroughed msix device
...pci-assignable-add 0000:1f:0.0
2. xm cr -c vm.cfg
[root@rac10box2 ~]# cat /proc/interrupts |grep mpt2sas0
48: 340449 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge mpt2sas0-msix0
[root@rac10box2 ~]# cat /proc/irq/48/smp_affinity
0fff
[root@rac10box2 ~]# echo 2 > /proc/irq/48/smp_affinity
[root@rac10box2 ~]# cat /proc/irq/48/smp_affinity
0002
[root@rac10box2 ~]# cat /proc/interrupts |grep mpt
48: 342051 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-e...
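A hedged restatement of the session above, for verifying that the pin took effect
(IRQ 48 and the mpt2sas0 name follow the excerpt):
echo 2 > /proc/irq/48/smp_affinity     # restrict the MSI-X vector to CPU1
cat /proc/irq/48/smp_affinity          # expect 0002
grep mpt2sas0 /proc/interrupts         # further increments should show up in the CPU1 column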
2019 Jun 04
1
centos7.6 nvme support
Hello:
I created a CentOS 7.6 virtual machine using the 3.10.0-957 kernel and
PCI passthrough for an NVMe device, and I found that interrupts are not
distributed to the target CPUs set in smp_affinity. The problem was solved
when I updated the kernel to 5.0.20. Can anyone tell me how to solve this
problem without updating the kernel?
Thanks in advance!
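A hedged sketch for confirming inside the guest whether interrupts actually follow
the mask; the IRQ number 75 and the nvme0n1 device are placeholders.
echo 4 > /proc/irq/75/smp_affinity_list                        # request CPU4 only
grep nvme /proc/interrupts                                     # note the per-CPU counters
dd if=/dev/nvme0n1 of=/dev/null bs=1M count=256 iflag=direct   # generate some I/O
grep nvme /proc/interrupts                                     # the deltas should land in the CPU4 column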
2006 Feb 09
1
Optimizing Linux to run Asterisk
Could anyone recommend a website or howto on optimizing Linux to
run Asterisk? Examples of what I mean are:
Renice of Asterisk PIDs
Forcing IRQ smp_affinity (for interrupt-hogging T1 cards)
.. That kind of stuff. I looked on the wiki and nothing directly
mentions server optimization. Or is this something that *should* be
totally irrelevant when dealing with Asterisk?
P.S. I don't mean obvious things such as limiting I/O (ie: turn off
debug lo...
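A hedged sketch of the two examples mentioned, assuming a PID lookup via pidof and
a placeholder IRQ number for the T1 card:
renice -10 -p $(pidof asterisk)        # raise the priority of the Asterisk processes
echo 1 > /proc/irq/169/smp_affinity    # keep the card's interrupts on CPU0 (IRQ 169 is a placeholder)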
2013 May 25
1
Question regarding dropped packets
...y. I
see the packets coming in on eth0, the bridge, and sometimes on the vif
device but the domU itself doesn't see these packets.
Has anyone seen this type of behaviour before? I've tried all the
following with little success:
- increasing txqueuelen
- changing the smp_affinity settings for our network devices
- dedicating the first 2 cores to dom0 and letting the domUs use
the remaining 14.
If anyone has any ideas or additional things I can try, I would be most
grateful :)
Thanks,
Jeff
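For context, a hedged sketch of the mitigations listed; the interface name, IRQ
number, mask, and toolstack command are placeholders/assumptions.
ip link set eth0 txqueuelen 2000       # larger transmit queue
echo 3 > /proc/irq/58/smp_affinity     # spread or pin the NIC IRQ across CPUs 0-1
xl vcpu-pin 0 all 0-1                  # pin dom0's vcpus to the first two cores (xm vcpu-pin on older toolstacks)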
2010 May 16
1
Digium TE121P + DAHDI
...ere we
can hear ourselves speaking. The other party doesn't hear this, however, and
it's perfectly clean.
After solving some of the IRQ/processing issues, where HDLC errors would
pop up after a week and take down the connection completely (only solvable
with a reboot), we changed the smp_affinity settings so that each device
basically runs on its own core.
This seemed to stop the extended IRQ misses and the increased delays.
However, it hadn't fixed the echo issue.
After some digging on this it seems that there could be a problem with DAHDI
and I want to get to the bottom...
2007 Jan 24
2
Thoughput
Hi,
I am after a feel for the throughput capabilities of tc and iptables in
comparison to dedicated hardware. I have heard talk of 1Gb+ throughput
with minimal performance impact using 50-ish tc rules and 100+ iptables
rules.
Is there anyone here running large throughput / large configurations, and if
so, what sort of figures?
Regards
Dan
2006 Jun 02
2
Audio problems on Zap & SIP, local network, not IRQ related?
I am trying to get to the bottom of audio clicks, pops, dropouts with my
Asterisk server. These occur even when the system is under minimal load
(e.g. 1 Zap device in a queue being played music on hold) and occur with
both Zap and SIP devices, so it isn't network related. The audio problems occur
at the same time on all channels and seem to happen when Asterisk "gets busy"
and uses
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...l vqs, and the above throughput
>> data has been very close to same fio test in host side with single
>> job. So more improvement should be observed once more IOthreads are
>> used for handling requests from multi vqs.
>>
>> TODO:
>> - adjust vq's irq smp_affinity according to blk-mq hw queue's cpumask
>>
>> V3:
>> - fix use-after-free on vq->name reported by Michael
>>
>> V2: (suggestions from Michael and Dave Chinner)
>> - allocate virtqueues' pointers dynamically
>> - make sur...
2009 Sep 11
1
Quo vadis?
Hi,
In the course of my experiments with rt kernels,
it so happens that the GUI versions of tuna built
from upstream (RHEL, F11) SRPMs show wrong affinity settings
for IRQ threads.
Where would be a proper place to report this
and get some help on other rt-related issues?
Thanks,
Sasha
2015 Aug 10
0
managedsave/start causes IRQ and task blocked for more than 120 seconds errors
...dmesg|grep "IRQ 21"
[ 1.141040] ACPI: PCI Interrupt Link [GSIF] enabled at IRQ 21
$ ls -lA /proc/irq/21
total 0
-r--r--r-- 1 root root 0 Aug 10 19:42 affinity_hint
-r--r--r-- 1 root root 0 Aug 10 19:42 node
dr-xr-xr-x 2 root root 0 Aug 10 19:42 qxl
-rw-r--r-- 1 root root 0 Aug 10 19:42 smp_affinity
-rw-r--r-- 1 root root 0 Aug 10 19:42 smp_affinity_list
-r--r--r-- 1 root root 0 Aug 10 19:42 spurious
dr-xr-xr-x 2 root root 0 Aug 10 19:42 virtio2
$ ls -lA /proc/irq/21/qxl
total 0
$ ls -lA /proc/irq/21/virtio2
total 0
=====
{{{ on host, everything else is on guest }}}
virsh # managedsave <...
2014 Jun 26
0
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...handles requests from all vqs, and the above throughput
> data has been very close to same fio test in host side with single
> job. So more improvement should be observed once more IOthreads are
> used for handling requests from multi vqs.
>
> TODO:
> - adjust vq's irq smp_affinity according to blk-mq hw queue's cpumask
>
> V3:
> - fix use-after-free on vq->name reported by Michael
>
> V2: (suggestions from Michael and Dave Chinner)
> - allocate virtqueues' pointers dynamically
> - make sure the per-queue spinlock isn...
2006 Sep 16
2
Performance problem on a linux bridge used for shaping.
Hello,
Here is the situation. There is a machine with 3 Intel gigabit cards, 2
of them on PCI-X and bridged; the third is used only for management
access. The machine is a dual Xeon 2.8GHz with HT. With the 2.6.8 kernel
from Debian (testing) and HTB with u32 filters enabled, I usually get about 30-40%
software interrupts on CPU0 and CPU2, and without HTB and u32, about 10% less.
Now, if I boot with 2.6.17.9 kernel,
2013 May 21
1
How to load balance interrupts of a NIC on the PCI-MSI-edge driver in CentOS? :(
...) Could anybody please help me with a solution so that I am able to
balance the interrupts equally on both cores?
b.) Is there any way to actually assign more interrupts to this IRQ (25),
or is that something that the kernel discreetly takes care of?
Thanks
Alex
[root at node2 ~]# cat /proc/irq/25/smp_affinity
3
Btw I also tried setting this value to "2" and "1" to choose
which core handles it, and that did work, but again the problem is I can
only use one core at a time and not balance!
[root at node2 ~]# cat /proc/interrupts
CPU0 CPU1
0: 229...
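A hedged note: on many x86 systems a single MSI(-edge) vector is delivered to one
CPU at a time even when the affinity mask names several, which matches the behaviour
described above. If the kernel supports Receive Packet Steering, receive processing
can be spread in software instead (interface name and mask are placeholders).
echo 3 > /sys/class/net/eth0/queues/rx-0/rps_cpus   # let CPU0 and CPU1 both process received packets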
2014 Jul 01
0
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...ughput
>>> data has been very close to same fio test in host side with single
>>> job. So more improvement should be observed once more IOthreads are
>>> used for handling requests from multi vqs.
>>>
>>> TODO:
>>> - adjust vq's irq smp_affinity according to blk-mq hw queue's cpumask
>>>
>>> V3:
>>> - fix use-after-free on vq->name reported by Michael
>>>
>>> V2: (suggestions from Michael and Dave Chinner)
>>> - allocate virtqueues' pointers dynamically
&...
2014 Jun 26
6
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...io-blk-mq device, only one
IOthread handles requests from all vqs, and the above throughput
data has been very close to same fio test in host side with single
job. So more improvement should be observed once more IOthreads are
used for handling requests from multi vqs.
TODO:
- adjust vq's irq smp_affinity according to blk-mq hw queue's cpumask
V3:
- fix use-after-free on vq->name reported by Michael
V2: (suggestions from Michael and Dave Chinner)
- allocate virtqueues' pointers dynamically
- make sure the per-queue spinlock isn't kept in same cache line
- make each queue'...