Displaying 20 results from an estimated 200 matches similar to: "[patch 0/4] Revised softlockup watchdog improvement patches"
2007 Apr 18
2
[patch 0/2] softlockup watchdog improvements
Here are a couple of patches to improve the softlockup watchdog.
The first changes the softlockup timer from using jiffies to sched_clock()
as a timebase. Xen and VMI implement sched_clock() as counting unstolen
time, so time stolen by the hypervisor won't cause the watchdog to bite.
The second adds per-cpu enable flags for the watchdog timer. This allows
the timer to be disabled when the
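A minimal sketch of the mechanism these two patches describe, assuming a simplified watchdog loop (the names and structures below are illustrative, not the actual kernel patch): the watchdog records its last "touch" as a sched_clock() value rather than a jiffies count, and each CPU carries an enable flag so the check can be switched off.

    /* Illustrative sketch only -- not the real kernel code.  The point is that
     * the watchdog compares sched_clock() deltas (unstolen time on Xen/VMI)
     * instead of jiffies, and honours a per-CPU enable flag. */
    #include <stdint.h>
    #include <stdio.h>

    #define NR_CPUS            4
    #define WATCHDOG_THRESH_NS (10ULL * 1000 * 1000 * 1000)   /* 10 seconds */

    /* Stand-in for the kernel's sched_clock(); on Xen/VMI this counts only
     * unstolen nanoseconds, so hypervisor-stolen time never accumulates here. */
    static uint64_t sched_clock_ns[NR_CPUS];

    static uint64_t touch_timestamp[NR_CPUS];   /* last "touch" per CPU        */
    static int      watchdog_enabled[NR_CPUS];  /* per-CPU enable flag         */

    static void touch_softlockup_watchdog(int cpu)
    {
        touch_timestamp[cpu] = sched_clock_ns[cpu];
    }

    static void softlockup_tick(int cpu)
    {
        if (!watchdog_enabled[cpu])
            return;                             /* timer disabled for this CPU */

        if (sched_clock_ns[cpu] - touch_timestamp[cpu] > WATCHDOG_THRESH_NS)
            printf("BUG: soft lockup detected on CPU#%d!\n", cpu);
    }

    int main(void)
    {
        watchdog_enabled[0] = 1;
        touch_softlockup_watchdog(0);

        sched_clock_ns[0] += 11ULL * 1000 * 1000 * 1000;  /* 11s of unstolen time */
        softlockup_tick(0);                               /* reports a lockup     */

        watchdog_enabled[0] = 0;                          /* e.g. CPU going down  */
        softlockup_tick(0);                               /* silent: flag cleared */
        return 0;
    }

Because Xen and VMI count only unstolen time in sched_clock(), time stolen by the hypervisor never widens the delta that the check compares against the threshold.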
2007 Apr 18
2
[PATCH RFC] Change softlockup watchdog to ignore stolen time
The softlockup watchdog is currently a nuisance in a virtual machine,
since the whole system could have the CPU stolen from it for a long
period of time. While it would be unlikely for a guest domain to be
denied timer interrupts for over 10s, it could happen and any softlockup
message would be completely spurious.
Earlier I proposed that sched_clock() return time in unstolen
nanoseconds, which
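Roughly, the proposal is that on a paravirtualized guest sched_clock() advances only while the vCPU actually runs, so a long gap caused purely by stolen CPU time produces almost no delta for the watchdog to act on. A toy illustration, assuming a made-up stolen-time counter (real Xen/VMI implementations read per-vCPU stolen time from the hypervisor):

    /* Toy model of an "unstolen" sched_clock(): wall-clock time the vCPU has
     * existed minus the time the hypervisor ran something else on its CPU.
     * The stolen_ns counter is invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t wall_ns;     /* total elapsed time seen by the guest  */
    static uint64_t stolen_ns;   /* time the hypervisor ran other guests  */

    static uint64_t sched_clock_unstolen(void)
    {
        return wall_ns - stolen_ns;
    }

    int main(void)
    {
        uint64_t before = sched_clock_unstolen();

        /* 12s pass on the host, but 11s of it was stolen from this vCPU. */
        wall_ns   += 12ULL * 1000 * 1000 * 1000;
        stolen_ns += 11ULL * 1000 * 1000 * 1000;

        uint64_t delta = sched_clock_unstolen() - before;
        printf("watchdog sees only %llu ns of progress\n",
               (unsigned long long)delta);   /* 1s -> no spurious lockup report */
        return 0;
    }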
2007 Jan 30
45
[PATCH] Fix softlockup issue after vcpu hotplug
Stamp the softlockup thread earlier, before do_timer, because the
latter is what actually triggers the lockup warning after a
long offline period. Otherwise, I observed softlockup warnings
easily at manual vcpu hot-remove/plug, or when a suspend is
cancelled back into the old context.
One point here is to cover both stolen and blocked time when
comparing against the offline threshold. vcpu hotplug falls into 'stolen'
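A rough sketch of the point being made here, with illustrative names rather than the actual Xen code: treat stolen and blocked time as one offline gap, and if that gap exceeds the threshold, touch the watchdog before the normal tick/do_timer path gets to run its check.

    /* Toy illustration of the idea in this patch description, not the Xen
     * code itself: when a vcpu comes back after being offline, count stolen
     * and blocked time together, and if the gap exceeds the threshold, touch
     * the watchdog before normal tick processing so no spurious warning fires. */
    #include <stdint.h>
    #include <stdio.h>

    #define SOFTLOCKUP_THRESH_NS (10ULL * 1000 * 1000 * 1000)

    static uint64_t now_ns;        /* current time                      */
    static uint64_t touch_ns;      /* last time the watchdog was kicked */

    static void touch_softlockup_watchdog(void) { touch_ns = now_ns; }

    static void softlockup_check(void)
    {
        if (now_ns - touch_ns > SOFTLOCKUP_THRESH_NS)
            printf("BUG: soft lockup detected!\n");
    }

    /* Timer path on return from vcpu hot-replug or a cancelled suspend. */
    static void account_offline_time(uint64_t stolen_ns, uint64_t blocked_ns)
    {
        /* hotplug shows up as stolen time, suspend-cancel as blocked time */
        if (stolen_ns + blocked_ns > SOFTLOCKUP_THRESH_NS)
            touch_softlockup_watchdog();   /* stamp before the tick code runs */

        softlockup_check();                /* stands in for do_timer()'s path */
    }

    int main(void)
    {
        now_ns += 30ULL * 1000 * 1000 * 1000;   /* 30s offline gap      */
        account_offline_time(now_ns, 0);        /* all of it was stolen */
        return 0;                               /* prints nothing       */
    }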
2007 Feb 08
5
vmx status report against changeset 13826
We have tested the latest Xen on VT platforms with the Intel 915/E8500
chipset.
Tests on all three platforms (32/PAE/32E) are SMP-based, meaning we
boot SMP guest OSes in VMX.
Here is the test summary:
New issue
================================================
No new issue
Issues List:
================================================
1) IA32E/PAE: 32bit Vista RTM network doesn't
2006 Sep 01
11
BUG: soft lockup detected on CPU#0! on 3.0.2-2
BUG: soft lockup detected on CPU#0!
Pid: 2213, comm: smbiod
EIP: 0061:[<f4990f2e>] CPU: 0
EIP is at smbiod+0x116/0x16d [smbfs]
EFLAGS: 00000246 Tainted: GF (2.6.16-xen-automount #1)
EAX: 00000000 EBX: f4996400 ECX: f2c99f68 EDX: f2c98000
ESI: f2c98000 EDI: c06f5780 EBP: f2c99fb8 DS: 007b ES: 007b
CR0: 8005003b CR2: b7f77000 CR3: 326e2000 CR4: 00000640
2008 Apr 14
8
zaptel 1.4.10 regression with TE220B on Proliant DL380 G5 ?
Hi list,
After a lot of testing + troubleshooting, I guess I'm observing
what I am now calling a regression with zaptel 1.4.10 (is it?)
As such I call for peer feedback, before either asking Digium
install support or filing a bug.
Thanks in advance!
System: HP Proliant DL380 G5 with 2x PCI-X + 1x PCIe riser card
OS: Centos 5
Kernel: 2.6.18-53.1.14.el5 (also tested under
2014 Apr 01
2
[PULL] virtio-next
The following changes since commit 33807f4f0daec3b00565c2932d95f614f5833adf:
Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6 (2014-03-11 11:53:42 -0700)
are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux.git tags/virtio-next-for-linus
for you to fetch changes up to fc4324b4597c4eb8907207e82f9a6acec84dd335:
2008 Sep 28
3
[LLVMdev] llvm-ld hangs
Hi,
I'm trying to compile UML with LLVM. However, llvm-ld hangs while
linking modules; it's consuming 99% of the CPU:
llvm-ld -v -r -o kernel/built-in.o kernel/sched.o kernel/fork.o
kernel/exec_domain.o kernel/panic.o kernel/printk.o kernel/profile.o
kernel/exit.o kernel/itimer.o kernel/time.o kernel/softirq.o
kernel/resource.o kernel/sysctl.o kernel/capability.o kernel/ptrace.o
2022 Dec 30
1
[PATCH 3/4] virtio_ring: introduce a per virtqueue waitqueue
On Thu, Dec 29, 2022 at 4:10 PM Michael S. Tsirkin <mst at redhat.com> wrote:
>
> On Thu, Dec 29, 2022 at 04:04:13PM +0800, Jason Wang wrote:
> > On Thu, Dec 29, 2022 at 3:07 PM Michael S. Tsirkin <mst at redhat.com> wrote:
> > >
> > > On Wed, Dec 28, 2022 at 07:53:08PM +0800, Jason Wang wrote:
> > > > On Wed, Dec 28, 2022 at 2:34 PM Jason Wang
2023 Jan 27
1
[PATCH 3/4] virtio_ring: introduce a per virtqueue waitqueue
On Fri, Dec 30, 2022 at 11:43:08AM +0800, Jason Wang wrote:
> On Thu, Dec 29, 2022 at 4:10 PM Michael S. Tsirkin <mst at redhat.com> wrote:
> >
> > On Thu, Dec 29, 2022 at 04:04:13PM +0800, Jason Wang wrote:
> > > On Thu, Dec 29, 2022 at 3:07 PM Michael S. Tsirkin <mst at redhat.com> wrote:
> > > >
> > > > On Wed, Dec 28, 2022 at
2007 Apr 18
2
problem with paravirt part of series
I tried booting the paravirt patches on a real machine to see what would
happen. It had worked OK under qemu, so I thought it would be worth it.
It seems to boot OK, though perhaps fairly slowly, but once it hits
usermode it gets into trouble. When starting udevd, the startup script
runs MAKEDEV, which seems to get stuck in an infinite loop in
userspace. It eventually gets past that part
2008 Jan 20
2
BUG: soft lockup detected on CPU#?
Hello All.
I've just started looking into Xen and have a test environment in place. I'm seeing an
annoying problem that I thought worthy of a post.
Config:
I have 2 x HP DL585 servers each with 4 Dual core Opterons (non-vmx) and 16GB RAM
configured as Xen servers. These run CentOS 5.1 with the latest updates applied. These
systems both attach to an iSCSI target, which is an HP DL385
2023 Jan 30
1
[PATCH 3/4] virtio_ring: introduce a per virtqueue waitqueue
On Mon, Jan 30, 2023 at 03:44:24PM +0800, Jason Wang wrote:
> On Mon, Jan 30, 2023 at 1:43 PM Michael S. Tsirkin <mst at redhat.com> wrote:
> >
> > On Mon, Jan 30, 2023 at 10:53:54AM +0800, Jason Wang wrote:
> > > On Sun, Jan 29, 2023 at 3:30 PM Michael S. Tsirkin <mst at redhat.com> wrote:
> > > >
> > > > On Sun, Jan 29, 2023 at
2009 Apr 03
35
Xen system hang or freeze
Hi all,
This is my first post to the list; I hope someone out there can help!
I am running xen 3.0.3, with CentOS 5.2 based Dom0
(kernel-xen-2.6.18-92.1.22.el5)
Recently I have noticed some complete system lockups on a few different
servers. Neither Dom0 nor any of the guests responds to pings, and connecting a
keyboard and monitor to the system shows only a blank screen. Nothing is
written to logs
2009 Oct 23
11
soft lockups during live migrate..
Trying to migrate a 64bit PV guest with 64GB running medium to heavy load
on xen 3.4.0, it is showing a lot of soft lockups. The softlockups are
causing dom0 to be rebooted by the cluster FS. The hardware has 256GB and 32
CPUs.
Looking into the hypervisor through kdb, I see one cpu in sh_resync_all()
while all the other 31 appear to be spinning on the shadow_lock. I vaguely remember
seeing some thread on this while
2013 Apr 13
0
btrfs crash (and softlockup btrfs-endio-wri)
I am using NFS over btrfs (vanilla 3.8.5) for heavy CoW to clone virtual
disks with sizes 20-50GB. It worked OK for a couple of days, but
yesterday it crashed. Reboot fixed the problem and I do not see any data
corruption. I have a couple of different kdumps; I will include one as
text and attach the other ones.
I am using Fedora 18 with vanilla 3.8.5. The filesystem is created over
a SAN volume
2007 Apr 24
2
SMP lockup in virtualized environment
In a previous mail, Jeremy Fitzhardinge wrote:
> The softlockup watchdog is currently a nuisance in a virtual machine,
> since the whole system could have the CPU stolen from it for a long
> period of time. While it would be unlikely for a guest domain to be
> denied timer interrupts for over 10s, it could happen and any
> softlockup message would be completely spurious.
I wonder