Displaying 20 results from an estimated 1000 matches similar to: "[PATCH] virtio_net: invoke softirqs after __napi_schedule"
2014 Nov 09
1
[PATCH] virtio-pci: Reset device on shutdown
This fixes a hanging issue during guest shutdown.
The device is left enabled even though we removed it and disabled msix
during shutdown. If the virtio device happens to get a new event right
at this point, seeing msix is disabled, it may try to notify us with an
IRQ, which is totally unexpected and thus will not be handled. In this
case the guest hangs.
Let's reset the device so that it will
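The fix proposed in this thread is to reset the device from the PCI shutdown path. A minimal sketch of that idea, assuming it lives in drivers/virtio/virtio_pci.c where the driver-private struct virtio_pci_device is visible (an illustration of the approach, not the posted patch):

  /* Sketch only: quiesce the device in the PCI .shutdown hook so a
   * late event cannot arrive as an unexpected legacy IRQ after
   * MSI-X has been torn down. */
  static void virtio_pci_shutdown(struct pci_dev *pci_dev)
  {
          struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);

          /* Return the device to its quiescent reset state. */
          vp_dev->vdev.config->reset(&vp_dev->vdev);
  }

  /* Wired up in the driver as:  .shutdown = virtio_pci_shutdown,  */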
2015 Sep 06
5
[PATCH v7] pci: quirk to skip msi disable on shutdown
On some hypervisors, virtio devices tend to generate spurious interrupts
when switching between MSI and non-MSI mode. Normally, either MSI or
non-MSI is used and all is well, but during shutdown, linux disables MSI
which then causes an "irq %d: nobody cared" message, with irq being
subsequently disabled.
Since bus mastering is already disabled at this point, disabling MSI
isn't
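A rough sketch of how such a quirk could be wired up for virtio's PCI vendor ID; the skip_msi_disable_on_shutdown flag, and the corresponding check the shutdown path would need before calling pci_msi_shutdown()/pci_msix_shutdown(), are illustrative assumptions rather than the actual v7 patch:

  /* Illustration only: mark virtio (Red Hat/Qumranet) devices so the
   * PCI shutdown path can skip disabling MSI/MSI-X for them.  The
   * flag name is hypothetical. */
  static void quirk_skip_msi_disable(struct pci_dev *dev)
  {
          dev->skip_msi_disable_on_shutdown = 1;  /* hypothetical field */
  }
  DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID,
                          quirk_skip_msi_disable);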
2015 Sep 17
1
[PATCH v7] pci: quirk to skip msi disable on shutdown
Bjorn Helgaas <bhelgaas at google.com> writes:
> On Sun, Sep 06, 2015 at 06:32:35PM +0300, Michael S. Tsirkin wrote:
>> On some hypervisors, virtio devices tend to generate spurious interrupts
>> when switching between MSI and non-MSI mode. Normally, either MSI or
>> non-MSI is used and all is well, but during shutdown, linux disables MSI
>> which then causes an
2010 Mar 02
3
2.6.33 high cpu usage
With the ATI bug I was hitting earlier fixed, only my btrfs partition
continues to show high cpu usage for some operations.
Rsync, git pull, git checkout and svn up are typical operations which
trigger the high cpu usage.
As an example, this perf report is from using git checkout to change to
a new branch; the change needed to checkout 208 files out of about 1600
total files. du(1) reports
2015 Sep 17
0
[PATCH v7] pci: quirk to skip msi disable on shutdown
On Sun, Sep 06, 2015 at 06:32:35PM +0300, Michael S. Tsirkin wrote:
> On some hypervisors, virtio devices tend to generate spurious interrupts
> when switching between MSI and non-MSI mode. Normally, either MSI or
> non-MSI is used and all is well, but during shutdown, linux disables MSI
> which then causes an "irq %d: nobody cared" message, with irq being
> subsequently
2006 Mar 15
3
softirq bound to vcpus
In "Understanding the Linux Kernel" 3rd edition, section 4.7 "Softirqs and
Tasklets" it states:
"Activation and execution [of defferable functions] are bound together: a
deferrable function that has been activated by a given CPU must be executed on
the same CPU. There is no self-evident reason suggesting that this rule is
beneficial for system performance. Binding the
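For reference, the per-CPU rule quoted above can be seen with the classic (pre-5.9) tasklet API: the deferred function runs on whichever CPU scheduled it, and raise_softirq() likewise only sets a bit in the local CPU's pending mask. A small sketch, for illustration only:

  #include <linux/interrupt.h>
  #include <linux/printk.h>
  #include <linux/smp.h>

  static void demo_fn(unsigned long data)
  {
          /* Executes on the CPU that called tasklet_schedule(). */
          pr_info("tasklet ran on CPU %d\n", smp_processor_id());
  }
  static DECLARE_TASKLET(demo_tasklet, demo_fn, 0);

  static void activate(void)
  {
          /* Queued on the local CPU's tasklet list, so execution
           * stays on this CPU. */
          tasklet_schedule(&demo_tasklet);
  }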
2008 Sep 12
0
[PATCH] Flush pending softirqs when cpu offline
Hi, Keir,
Thanks for checking in cpu online/offline support.
Another thought, inspired by Kevin: because of the order in which
different cpus enter the stop-machine context, there is a small window
in which some softirq (say softirq_A) can be issued to the dying cpu
right after the dying cpu has already handled softirq_A in do_softirq,
before it enters the stop_machine softirq. So this softirq_A
2015 Mar 12
1
[PATCH] virtio: Remove virtio device during shutdown
On Thu, 03/12 17:22, Michael S. Tsirkin wrote:
> On Wed, Mar 11, 2015 at 06:11:35PM +0800, Fam Zheng wrote:
> > On Wed, 03/11 10:06, Michael S. Tsirkin wrote:
> > > On Wed, Mar 11, 2015 at 04:09:17PM +0800, Fam Zheng wrote:
> > > > Currently shutdown is nop for virtio devices, but the core code could
> > > > remove things behind us such as MSI-X handler
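The approach discussed here is the removal variant of the shutdown fix above: tear the virtio device down in the PCI shutdown hook before the core disables MSI-X behind it. A sketch under the same assumptions as before (not the posted patch):

  static void virtio_pci_shutdown(struct pci_dev *pci_dev)
  {
          struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);

          /* Detach the driver and delete the virtqueues while the
           * MSI-X handlers are still installed and in a known-good
           * state. */
          unregister_virtio_device(&vp_dev->vdev);
  }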
2018 Jan 23
2
Xen 4.6.6-9 (with XPTI meltdown mitigation) packages making their way to centos-virt-xen-testing
On Mon, Jan 22, 2018 at 10:38 PM, Nathan March <nathan at gt.net> wrote:
> Just a heads up that I'm seeing major stability problems on these builds.
> Didn't have console capture setup unfortunately, but have seen my test
> hypervisor hard lock twice over the weekend.
>
> This is with xpti being used, rather than the shim.
Thanks for the heads-up. It's been
2005 May 17
8
scheduler independent forced vcpu selection
I'm working on a new hypercall, do_confer, which allows the directed
yielding of a vcpu to another vcpu. It is mainly used when a vcpu fails
to acquire a spinlock, yielding to the lock holder instead of spinning. I
ported the ppc64 spinlock implementation for the i386 linux portion. In
implementing the hypercall, I've been trying to figure out how to get
the scheduler
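As a sketch of the directed-yield idea described above: on a failed spinlock acquire, the vcpu confers its remaining timeslice to the lock holder instead of spinning. The do_confer() prototype, the holder_vcpu field, and the lock layout are made-up illustrations, not the actual interface being implemented:

  struct pv_spinlock {
          volatile int lock;        /* 0 = free, 1 = held */
          int holder_vcpu;          /* vcpu id of the current holder */
  };

  extern void do_confer(int vcpu);  /* hypothetical hypercall wrapper */

  static void pv_spin_lock(struct pv_spinlock *lk, int my_vcpu)
  {
          while (__sync_lock_test_and_set(&lk->lock, 1)) {
                  /* Busy: hand our timeslice to the holder so it can
                   * reach the unlock sooner. */
                  do_confer(lk->holder_vcpu);
          }
          lk->holder_vcpu = my_vcpu;
  }

  static void pv_spin_unlock(struct pv_spinlock *lk)
  {
          lk->holder_vcpu = -1;
          __sync_lock_release(&lk->lock);
  }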
2018 Jan 23
0
Xen 4.6.6-9 (with XPTI meltdown mitigation) packages making their way to centos-virt-xen-testing
> Thanks for the heads-up. It's been running through XenServer's tests
> as well as the XenProject's "osstest" -- I haven't heard of any
> additional issues, but I'll ask.
Looks like I can reproduce this pretty easily, this happened upon ssh'ing
into the server while I had a VM migrating into it. The system goes
completely unresponsive (can't even
2007 May 22
1
Kernel Panic in wct4xxp during unload on Zaptel-1.4.4
I attempted an upgrade of our production system from Asterisk/Zaptel 1.2 to
1.4 this weekend. Initially everything looked like it was working properly,
but some time in the day following the upgrade, the system died to a kernel
panic. I wasn't able to catch the entire kernel dump on the console
unfortunately.
I attempted to isolate the panic, and found that when 'service zaptel stop'
2006 Feb 19
3
ext3 involved in kernel panic in 2.6.13?
Dual Opteron system running ext3 atop drbd (network RAID) devices,
which, in turn, are atop LVM logical volumes. The underlying device
is hardware SCSI RAID via a LSILogic HBA. The kernel is vanilla
2.6.13 on a Gentoo-based system.
A panic occurred, which contains references to ext3 code.
I'm not sure how others manage to get these typed out, but I'm
manually typing it from
2018 Jan 23
2
Xen 4.6.6-9 (with XPTI meltdown mitigation) packages making their way to centos-virt-xen-testing
Hi,
On Tue, Jan 23, 2018 at 10:35:24AM -0800, Nathan March wrote:
> > Thanks for the heads-up. It's been running through XenServer's tests
> > as well as the XenProject's "osstest" -- I haven't heard of any
> > additional issues, but I'll ask.
>
> Looks like I can reproduce this pretty easily, this happened upon ssh'ing
> into the
2005 Dec 05
11
Xen 3.0 and Hyperthreading an issue?
Just gave 3.0 a spin. Had been running 2.0.7 for the past 3 months or so without problems (aside from intermittent failure during live migration). Anyway, 3.0 seems to have an issue with my machine. It starts up the 4 domains that I've got defined (was running 6 user domains with 2.0.7, but two of those were running 2.4 kernels which I can't seem to build with Xen 3.0 yet, and
2007 Jun 13
2
HTB deadlock
Greetings,
I've been experiencing problems with HTB where the whole machine locks
up. This usually happens when the whole qdisc is being removed and
occasionally when a leaf is being removed.
Common is that it always happens when some sort of removal is in
progress.
Console output I have captured is at the end of this message. The same
behavior exists from vanilla 2.6.19.7 and above.