Displaying 11 results from an estimated 11 matches for "fatal_signal_pend".
2012 Apr 22
1
[PATCH 4/5] drm/nv50: let applications hanging on vm flush to be killed
...aph.c b/drivers/gpu/drm/nouveau/nv50_graph.c
index 6899547..a61853f 100644
--- a/drivers/gpu/drm/nouveau/nv50_graph.c
+++ b/drivers/gpu/drm/nouveau/nv50_graph.c
@@ -435,6 +435,11 @@ nv84_graph_tlb_flush(struct drm_device *dev, int engine)
 			if ((tmp & 7) == 1)
 				idle = false;
 		}
+
+		if (fatal_signal_pending(current)) {
+			ret = -ERESTARTSYS;
+			break;
+		}
 	} while (!idle && !(timeout = ptimer->read(dev) - start > 2000000000));
 	if (timeout) {
--
1.7.8.5
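For context, the pattern the hunk introduces, sketched outside the driver (poll_hw_idle(), read_ns(), and wait_for_idle_killable() are hypothetical stand-ins for the register read and ptimer->read() calls above, not names from the patch):

#include <linux/sched/signal.h>	/* fatal_signal_pending() */

static int wait_for_idle_killable(void)
{
	u64 start = read_ns();

	do {
		if (poll_hw_idle())
			return 0;

		/* A wedged engine may never go idle; without this check
		 * a process stuck here could not even be SIGKILLed. */
		if (fatal_signal_pending(current))
			return -ERESTARTSYS;
	} while (read_ns() - start < 2000000000ULL);	/* 2s timeout */

	return -EBUSY;
}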
2023 May 23
4
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...ING inside the loop? Does this mean that work->fn()
can return with current->state != RUNNING ?
work->fn(work);
Now the main question. Whatever we do, SIGKILL/SIGSTOP/etc can come right
before we call work->fn(). Is it "safe" to run this callback with
signal_pending() or fatal_signal_pending() ?
Finally. I never looked into drivers/vhost/ before so I don't understand
this code at all, but let me ask anyway... Can we change vhost_dev_flush()
to run the pending callbacks rather than wait for vhost_worker() ?
I guess we can't, ->mm won't be correct, but can you confirm...
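For reference on the distinction being asked about: signal_pending() reports any queued signal, while fatal_signal_pending() reports specifically SIGKILL. From include/linux/sched/signal.h, approximately:

static inline int __fatal_signal_pending(struct task_struct *p)
{
	return unlikely(sigismember(&p->pending.signal, SIGKILL));
}

static inline int fatal_signal_pending(struct task_struct *p)
{
	return signal_pending(p) && __fatal_signal_pending(p);
}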
2023 Jun 02
2
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...+ /* mb paired w/ vhost_task_stop */
> + if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags))
> + break;
> +
> + if (!dead && signal_pending(current)) {
> + struct ksignal ksig;
> + /*
> + * Calling get_signal will block in SIGSTOP,
> + * or clear fatal_signal_pending, but remember
> + * what was set.
> + *
> + * This thread won't actually exit until all
> + * of the file descriptors are closed, and
> + * the release function is called.
> + */
> + dead = get_signal(&ksig);
> + if (dead)
> + clear_thr...
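Condensed, the loop being quoted looks roughly like this (a sketch assembled from the quoted fragments and the changelog, not the full patch; the vhost_worker() call returning true when work was done is taken from the cover text, and the set_current_state() bookkeeping is trimmed):

for (;;) {
	/* mb paired w/ vhost_task_stop */
	if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags))
		break;

	if (!dead && signal_pending(current)) {
		struct ksignal ksig;

		/* Blocks in SIGSTOP, cooperates with the freezer, and
		 * returns true once a fatal signal is dequeued;
		 * "dead" remembers that across iterations. */
		dead = get_signal(&ksig);
	}

	if (!vhost_worker(worker))	/* true if work was done */
		schedule();
}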
2023 May 24
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/23, Eric W. Biederman wrote:
>
> I want to point out that we need to consider not just SIGKILL, but
> SIGABRT that causes a coredump, as well as the process performing
> an ordinary exit(2). All of which will cause get_signal to return
> SIGKILL in this context.
Yes, but probably SIGABRT/exit doesn't really differ from SIGKILL wrt
vhost_worker().
> It is probably not
2023 May 22
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/22, Mike Christie wrote:
>
> On 5/22/23 7:30 AM, Oleg Nesterov wrote:
> >> + /*
> >> + * When we get a SIGKILL our release function will
> >> + * be called. That will stop new IOs from being queued
> >> + * and check for outstanding cmd responses. It will then
> >> + * call vhost_task_stop to tell us to return and exit.
>
2011 Mar 07
4
[PATCH 0/3] ocfs2: Add batched discard support
Hi all,
This patch set adds batched discard support to ocfs2. Please review. Thanks.
Regards,
Tao
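For readers new to the feature: "batched discard" means the filesystem implements the generic FITRIM ioctl, so userspace can trim all free space in one pass instead of discarding block-by-block at delete time. A minimal caller using only the generic interface (the mount point is an example, nothing here is ocfs2-specific):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* FITRIM, struct fstrim_range */

int main(void)
{
	struct fstrim_range range = {
		.start  = 0,
		.len    = (__u64)-1,	/* whole filesystem */
		.minlen = 0,		/* no minimum extent size */
	};
	int fd = open("/mnt/ocfs2", O_RDONLY);

	if (fd < 0 || ioctl(fd, FITRIM, &range) < 0) {
		perror("FITRIM");
		return 1;
	}
	/* On return the kernel updates .len to the bytes actually trimmed. */
	printf("trimmed %llu bytes\n", (unsigned long long)range.len);
	return 0;
}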
2023 May 23
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...hat work->fn()
> can return with current->state != RUNNING ?
>
>
> work->fn(work);
>
> Now the main question. Whatever we do, SIGKILL/SIGSTOP/etc can come right
> before we call work->fn(). Is it "safe" to run this callback with
> signal_pending() or fatal_signal_pending() ?
>
>
> Finally. I never looked into drivers/vhost/ before so I don't understand
> this code at all, but let me ask anyway... Can we change vhost_dev_flush()
> to run the pending callbacks rather than wait for vhost_worker() ?
> I guess we can't, ->mm won't be...
2023 Jun 01
4
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...in loop has been moved from vhost_worker into
vhost_task_fn. vhost_worker now returns true if work was done.
The main loop has been updated to call get_signal which handles
SIGSTOP, freezing, and collects the message that tells the thread to
exit as part of process exit. This collection clears
__fatal_signal_pending. This collection is not guaranteed to
clear signal_pending() so clear that explicitly so the schedule()
sleeps.
For now the vhost thread continues to exist and run work until the
last file descriptor is closed and the release function is called as
part of freeing struct file. To avoid hangs i...
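The "clear that explicitly" step mentioned above is small; sketched here (the exact placement in the patch may differ):

/* get_signal() clears __fatal_signal_pending but is not guaranteed
 * to clear TIF_SIGPENDING; if that stays set, schedule() returns
 * immediately and the worker spins instead of sleeping. */
if (dead)
	clear_thread_flag(TIF_SIGPENDING);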
2012 Jan 11
12
[PATCH 00/11] Btrfs: some patches for 3.3
The biggest one is a fix for fstrim, and there's a fix for on-disk
free space cache. Others are small fixes and cleanups.
The last three have been sent weeks ago.
The patchset is also available in this repo:
git://repo.or.cz/linux-btrfs-devel.git for-chris
Note there's a small conflict with Al Viro's vfs changes.
Li Zefan (11):
Btrfs: add pinned extents to
2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all,
These two patches add virtio-nvme to the kernel and QEMU,
largely adapted from the virtio-blk and NVMe code.
As the title says, this is a request for comments.
Play it in Qemu with:
-drive file=disk.img,format=raw,if=none,id=D22 \
-device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
The goal is to have a full NVMe stack from the VM guest (virtio-nvme)
to the host (vhost_nvme) to LIO NVMe-over-fabrics.