search for: hrtimer_cancel

Displaying 20 results from an estimated 44 matches for "hrtimer_cancel".

2007 Dec 21
0
[kvm-devel] [Virtio-for-kvm] [PATCH 6/13] [Mostly resend] virtio additions
...if (vi->out_max != vi->out_num) + printk("%s: out_max changed from %u to %u\n", + dev->name, vi->out_max, vi->out_num); + vi->out_max = vi->out_num; + vi->out_num = 0; + /* Kick off send immediately. */ + hrtimer_cancel(&vi->tx_timer); + vi->svq->vq_ops->kick(vi->svq); netif_stop_queue(dev); return NETDEV_TX_BUSY; } - vi->svq->vq_ops->kick(vi->svq); + if (++vi->out_num == vi->out_max) { + hrtimer_cancel(&vi->tx_timer); +...
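The pattern in this diff is a deferred tx kick: outgoing buffers are batched, and a pending hrtimer flush is cancelled when the batch fills so the host can be notified immediately. A minimal sketch of that shape, with hypothetical names (my_dev, out_num, out_max are illustrative, not the actual virtio-net fields):

#include <linux/hrtimer.h>
#include <linux/ktime.h>

struct my_dev {
	struct hrtimer tx_timer;	/* deferred-kick timer */
	unsigned int out_num;		/* buffers queued since the last kick */
	unsigned int out_max;		/* batch size forcing an immediate kick */
};

static void my_dev_kick(struct my_dev *d)
{
	/* notify the host, e.g. vq_ops->kick() in the patch above */
}

static void my_dev_queue_one(struct my_dev *d)
{
	if (++d->out_num == d->out_max) {
		/* Batch full: drop the pending deferred kick, notify now. */
		hrtimer_cancel(&d->tx_timer);
		my_dev_kick(d);
		d->out_num = 0;
	} else if (!hrtimer_active(&d->tx_timer)) {
		/* Partial batch: arm a timer so it is flushed eventually. */
		hrtimer_start(&d->tx_timer, ms_to_ktime(1), HRTIMER_MODE_REL);
	}
}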
2020 Jan 15
1
[PATCH v2 19/21] drm/vkms: Convert to CRTC VBLANK callbacks
...ons(+), 8 deletions(-) diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c index 74f703b8d22a..ac85e17428f8 100644 --- a/drivers/gpu/drm/vkms/vkms_crtc.c +++ b/drivers/gpu/drm/vkms/vkms_crtc.c @@ -76,10 +76,12 @@ static void vkms_disable_vblank(struct drm_crtc *crtc) hrtimer_cancel(&out->vblank_hrtimer); } -bool vkms_get_vblank_timestamp(struct drm_device *dev, unsigned int pipe, - int *max_error, ktime_t *vblank_time, - bool in_vblank_irq) +static bool vkms_get_vblank_timestamp(struct drm_crtc *crtc, + int *max_error, ktime_t *vblank_time...
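The series moves the vblank hooks from struct drm_driver into struct drm_crtc_funcs, so they take a struct drm_crtc * instead of a (dev, pipe) pair and can be static to the driver. A hedged sketch of the resulting wiring (the my_crtc_* names are illustrative, not the vkms symbols):

#include <drm/drm_crtc.h>

static int my_crtc_enable_vblank(struct drm_crtc *crtc)
{
	/* start the driver's vblank source, e.g. an hrtimer */
	return 0;
}

static void my_crtc_disable_vblank(struct drm_crtc *crtc)
{
	/* stop it again, e.g. hrtimer_cancel() as in the hunk above */
}

static bool my_crtc_get_vblank_timestamp(struct drm_crtc *crtc,
					 int *max_error, ktime_t *vblank_time,
					 bool in_vblank_irq)
{
	/* fill *vblank_time for this CRTC and report success */
	return true;
}

static const struct drm_crtc_funcs my_crtc_funcs = {
	/* ... the usual hooks ... */
	.enable_vblank = my_crtc_enable_vblank,
	.disable_vblank = my_crtc_disable_vblank,
	.get_vblank_timestamp = my_crtc_get_vblank_timestamp,
};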
2018 Dec 12
5
CentOS 7.6 external USB dmesg issue
...request+0x20c/0x560 [ext4] [ 1085.193936] [<ffffffffc1c1b59b>] ext4_mb_new_blocks+0x65b/0xa20 [ext4] [ 1085.193942] [<ffffffffa967918d>] ? __getblk+0x2d/0x300 [ 1085.193961] [<ffffffffc1c223cb>] ext4_ind_map_blocks+0xb9b/0xc20 [ext4] [ 1085.193968] [<ffffffffa94c6258>] ? hrtimer_cancel+0x28/0x40 [ 1085.193973] [<ffffffffa95d82c8>] ? zone_statistics+0x88/0xa0 [ 1085.193987] [<ffffffffc1bdfd35>] ext4_map_blocks+0x295/0x6e0 [ext4] [ 1085.193993] [<ffffffffa9657c7e>] ? do_select+0x73e/0x7c0 [ 1085.193999] [<ffffffffa961c0b2>] ? kmem_cache_alloc+0x1c2/0x1f0...
2007 Apr 18
2
[patch 0/2] softlockup watchdog improvements
Here's a couple of patches to improve the softlockup watchdog. The first changes the softlockup timer from using jiffies to sched_clock() as a timebase. Xen and VMI implement sched_clock() as counting unstolen time, so time stolen by the hypervisor won't cause the watchdog to bite. The second adds per-cpu enable flags for the watchdog timer. This allows the timer to be disabled when the
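The timebase change is easy to picture: instead of comparing jiffies, the watchdog records a per-CPU touch timestamp from sched_clock(), which Xen and VMI implement as unstolen time only. A rough sketch under those assumptions (names are illustrative, not the actual softlockup.c symbols):

#include <linux/percpu.h>
#include <linux/sched/clock.h>

static DEFINE_PER_CPU(unsigned long long, touch_ts_ns);

static void touch_watchdog_sketch(void)
{
	__this_cpu_write(touch_ts_ns, sched_clock());
}

static bool cpu_locked_up_sketch(unsigned long long threshold_ns)
{
	/* Stolen time does not advance sched_clock() here, so a guest
	 * paused by the hypervisor will not cross the threshold. */
	return sched_clock() - __this_cpu_read(touch_ts_ns) > threshold_ns;
}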
2010 Nov 02
4
Debian/squeeze: domU live migraton hangs
Hi, In view of the problems I was having with DomU network timeouts after a live migration (I posted that problem here a while ago but never got any replies except private emails), I finally updated my Debian/Squeeze Dom0s last night to a new kernel, from 2.6.32-23 to 2.6.32-26. Now live migration just hangs... Any ideas? Xen-related Debian packages (all from the Debian repository, except drbd
2020 Jan 15
0
[PATCH v2 19/21] drm/vkms: Convert to CRTC VBLANK callbacks
...iff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c > index 74f703b8d22a..ac85e17428f8 100644 > --- a/drivers/gpu/drm/vkms/vkms_crtc.c > +++ b/drivers/gpu/drm/vkms/vkms_crtc.c > @@ -76,10 +76,12 @@ static void vkms_disable_vblank(struct drm_crtc *crtc) > hrtimer_cancel(&out->vblank_hrtimer); > } > > -bool vkms_get_vblank_timestamp(struct drm_device *dev, unsigned int pipe, > - int *max_error, ktime_t *vblank_time, > - bool in_vblank_irq) > +static bool vkms_get_vblank_timestamp(struct drm_crtc *crtc, > + i...
2007 May 09
2
[PATCH 0/2 v05] lguest: TSC & hrtimers
The following patches are the latest update of the TSC and hrtimer patches I posted on 29/03. Rusty's original TSC patch has been resynced to the latest lguest repo, as has the hrtimer patch, which also incorporates feedback from Jeremy & Rusty: - Change clock event hrtimer to absolute time. 'now' is captured in the host during the hypercall. - Propagate -ETIME back to the
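The -ETIME item refers to the clockevents convention: a ->set_next_event() handler returns -ETIME when the requested expiry has already passed, letting the core retry rather than wait for an interrupt that will never arrive. A hedged sketch (hypercall_set_timer_abs() is a hypothetical stand-in for the lguest hypercall, and delta is treated as nanoseconds for brevity):

#include <linux/clockchips.h>
#include <linux/errno.h>
#include <linux/ktime.h>

static int hypercall_set_timer_abs(ktime_t expires)
{
	/* hypothetical: ask the host to arm an absolute-time hrtimer;
	 * returns a negative value if expires is already in the past */
	return 0;
}

static int my_set_next_event(unsigned long delta,
			     struct clock_event_device *evt)
{
	ktime_t expires = ktime_add_ns(ktime_get(), delta);

	if (hypercall_set_timer_abs(expires) < 0)
		return -ETIME;	/* already expired: the core will retry */
	return 0;
}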
2007 Apr 18
5
[patch 0/4] Revised softlockup watchdog improvement patches
Hi Ingo, This series of patches implements a number of improvements to the softlockup watchdog and its users. They are: 1. Make the watchdog ignore stolen time. When running under a hypervisor, the kernel may lose an arbitrary amount of time as "stolen time". This may cause the softlockup watchdog to trigger spuriously. Xen and VMI implement sched_clock() as measuring unstolen time,
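The per-cpu enable flag mentioned in the earlier cover letter is the simpler half: the periodic check bails out early on CPUs where the watchdog is switched off, e.g. while a CPU is coming up or going down. A minimal sketch with illustrative names:

#include <linux/percpu.h>

static DEFINE_PER_CPU(int, watchdog_enabled);

static void watchdog_tick_sketch(void)
{
	if (!__this_cpu_read(watchdog_enabled))
		return;	/* disabled on this CPU: nothing to check */

	/* ... otherwise compare sched_clock() against the touch
	 * timestamp, as in the earlier sketch ... */
}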
2023 Jul 31
1
[PATCH] virtio: a new vcpu watchdog driver
...> +static int stop_stall_detector_cpu(unsigned int cpu) > +{ > + int err = 0; > + struct scatterlist sg; > + > + struct vcpu_stall_priv *vcpu_stall_detector = > + per_cpu_ptr(vcpu_stall_detectors, cpu); > + > + /* Disable the stall detector for the current CPU */ > + hrtimer_cancel(&vcpu_stall_detector->vcpu_hrtimer); > + vcpu_stall->pet_event.is_initialized = false; > + vcpu_stall->pet_event.cpu_id = cpu; > + > + spin_lock(&vcpu_stall->lock); > + sg_init_one(&sg, &vcpu_stall->pet_event, sizeof(vcpu_stall->pet_event)); > + e...
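The quoted stop path relies on hrtimer_cancel() waiting for a running callback to finish before the detector is torn down. The start path it mirrors typically looks like the following sketch, a periodic per-CPU petting timer that re-arms itself (names are illustrative, not the driver's actual symbols):

#include <linux/hrtimer.h>
#include <linux/ktime.h>

static enum hrtimer_restart vcpu_pet_sketch(struct hrtimer *t)
{
	/* ... send a pet event to the host ... */
	hrtimer_forward_now(t, ms_to_ktime(1000));
	return HRTIMER_RESTART;	/* keep the periodic timer running */
}

static void start_stall_detector_sketch(struct hrtimer *t)
{
	hrtimer_init(t, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	t->function = vcpu_pet_sketch;
	hrtimer_start(t, ms_to_ktime(1000), HRTIMER_MODE_REL);
}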
2008 Dec 10
6
[PATCH 0/6] Clean up virtio device object handling [was Re: [PATCH] virtio: make PCI devices take a virtio_pci module ref]
(Moved from kvm at vger to virtualization at linux-foundation, changed subject, cleaned up cc list) On Wed, 2008-12-10 at 13:02 +0100, Kay Sievers wrote: > On Wed, Dec 10, 2008 at 10:49, Mark McLoughlin <markmc at redhat.com> wrote: > > On Tue, 2008-12-09 at 19:16 +0100, Kay Sievers wrote: > >> On Tue, Dec 9, 2008 at 17:41, Mark McLoughlin <markmc at redhat.com>
2020 Mar 02
0
[PATCH v1 02/11] virtio-mem: Paravirtualized memory hotplug
...sted size: 0x%llx", vm->requested_size); +} + +/* + * Workqueue function for handling plug/unplug requests and config updates. + */ +static void virtio_mem_run_wq(struct work_struct *work) +{ + struct virtio_mem *vm = container_of(work, struct virtio_mem, wq); + uint64_t diff; + int rc; + + hrtimer_cancel(&vm->retry_timer); + + if (vm->broken) + return; + +retry: + rc = 0; + + /* Make sure we start with a clean state if there are leftovers. */ + if (unlikely(vm->unplug_all_required)) + rc = virtio_mem_send_unplug_all_request(vm); + + if (atomic_read(&vm->config_changed)) { + a...
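The excerpt shows a retry-timer pattern: the work handler cancels any pending retry before starting, and (further down in the patch) re-arms the timer when a request fails, while the timer callback merely requeues the work. A hedged sketch of that loop; my_vm and my_vm_send_request() are illustrative stand-ins:

#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/workqueue.h>

struct my_vm {
	struct work_struct wq;
	struct hrtimer retry_timer;
};

static int my_vm_send_request(struct my_vm *vm)
{
	/* hypothetical: talk to the device; nonzero means "retry later" */
	return 0;
}

static enum hrtimer_restart my_vm_retry_fn(struct hrtimer *timer)
{
	struct my_vm *vm = container_of(timer, struct my_vm, retry_timer);

	schedule_work(&vm->wq);	/* retry from process context */
	return HRTIMER_NORESTART;
}

static void my_vm_run_wq(struct work_struct *work)
{
	struct my_vm *vm = container_of(work, struct my_vm, wq);

	hrtimer_cancel(&vm->retry_timer);	/* this run supersedes any pending retry */

	if (my_vm_send_request(vm))
		hrtimer_start(&vm->retry_timer, ms_to_ktime(30000),
			      HRTIMER_MODE_REL);
}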
2013 Mar 25
1
A problem when mount glusterfs via NFS
Hi: I run glusterfs with four nodes, 2x2 Distributed-Replicate. I mounted it via FUSE and ran some tests; everything was fine. However, when I mounted it via NFS, a problem appeared: when I copied 200G of files to the glusterfs volume, the glusterfs process on the server node (mounted by the client) was killed by the OOM killer, and all terminals on the client hung. After repeating the test many times, I got the
2015 May 25
8
[RFC V7 PATCH 0/7] enable tx interrupts for virtio-net
Hi: This is a new version of trying to enable tx interrupts for virtio-net. We used to avoid tx interrupts and orphan packets before transmission in virtio-net. This breaks socket accounting and can lead to several other side effects, e.g.: - Several other functions which depend on socket accounting cannot work correctly (e.g. TCP Small Queues) - No tx completion, which makes BQL or packet
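The BQL point is about the two-sided accounting that tx completions make possible: bytes are reported to the stack when queued and again when the device signals completion, and without a tx interrupt the second report never happens, so BQL cannot bound the queue. A sketch using the in-tree helpers (the my_* wrappers are illustrative):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void my_xmit_account(struct netdev_queue *txq, struct sk_buff *skb)
{
	netdev_tx_sent_queue(txq, skb->len);	/* on enqueue */
}

static void my_tx_irq_account(struct netdev_queue *txq,
			      unsigned int pkts, unsigned int bytes)
{
	/* on the completion interrupt, after freeing the finished skbs */
	netdev_tx_completed_queue(txq, pkts, bytes);
}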
2015 Feb 09
10
[PATCH RFC v5 net-next 0/6] enable tx interrupts for virtio-net
Hi: This is a new version of trying to enable tx interrupts for virtio-net. We used to avoid tx interrupts and orphan packets before transmission in virtio-net. This breaks socket accounting and can lead to several other side effects, e.g.: - Several other functions which depend on socket accounting cannot work correctly (e.g. TCP Small Queues) - No tx completion, which makes BQL or