Displaying 20 results from an estimated 29 matches for "get_signal".
2023 May 22
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...her process.
> 2. kthreads disabled freezing by setting PF_NOFREEZE, but vhost tasks
> didn't disable freezing or add support for it.
>
> To fix both bugs, this switches the vhost task to be a thread in the
> process that does the VHOST_SET_OWNER ioctl, and has vhost_worker call
> get_signal to support SIGKILL/SIGSTOP and freeze signals. Note that
> SIGKILL/STOP support is required because CLONE_THREAD requires
> CLONE_SIGHAND, which requires those two signals to be supported.
>
> This is a modified version of a patch originally written by Linus which
> handles his review comm...
2023 Jun 02
2
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...+
> +	for (;;) {
> +		bool did_work;
> +
> +		/* mb paired w/ vhost_task_stop */
> +		if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags))
> +			break;
> +
> +		if (!dead && signal_pending(current)) {
> +			struct ksignal ksig;
> +			/*
> +			 * Calling get_signal will block in SIGSTOP,
> +			 * or clear fatal_signal_pending, but remember
> +			 * what was set.
> +			 *
> +			 * This thread won't actually exit until all
> +			 * of the file descriptors are closed, and
> +			 * the release function is called.
> +			 */
> +			dead =...
2023 May 23
4
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/22, Oleg Nesterov wrote:
>
> Right now I think that "int dead" should die,
No, probably we shouldn't call get_signal() if we have already dequeued SIGKILL.
> but let me think tomorrow.
Maybe something like this... I don't like it, but I can't suggest anything better
right now.
	bool killed = false;

	for (;;) {
		...
		node = llist_del_all(&worker->work_list);
		if (!node) {
			schedule();...
2023 May 23
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Oleg Nesterov <oleg at redhat.com> writes:
> On 05/22, Oleg Nesterov wrote:
>>
>> Right now I think that "int dead" should die,
>
> No, probably we shouldn't call get_signal() if we have already
> dequeued SIGKILL.
Very much agreed. It is one thing to add a patch to move do_exit
out of get_signal. It is another to keep calling get_signal after
that. Nothing tests that case, and so we get some weird behaviors.
>> but let me think tomorrow.
>
> May b...
2023 Jun 01
4
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...detect the vhost task as another
process. 2. kthreads disabled freezing by setting PF_NOFREEZE, but
vhost tasks didn't disable freezing or add support for it.
To fix both bugs, this switches the vhost task to be a thread in the
process that does the VHOST_SET_OWNER ioctl, and has vhost_worker call
get_signal to support SIGKILL/SIGSTOP and freeze signals. Note that
SIGKILL/STOP support is required because CLONE_THREAD requires
CLONE_SIGHAND which requires those 2 signals to be supported.
This is a modified version of the patch written by Mike Christie
<michael.christie at oracle.com> which was a...
2023 May 22
3
[PATCH 0/3] vhost: Fix freezer/ps regressions
...T_SET_OWNER ioctl instead of a process under root/kthreadd.
This was breaking scripts.
2. vhost_tasks didn't disable or add support for freeze requests.
The following patches fix these issues by making the vhost_task a
thread under the process that did the VHOST_SET_OWNER ioctl, and by using
get_signal() to handle freeze and SIGSTOP/SIGKILL signals, which is required
when using CLONE_THREAD (really, CLONE_THREAD requires CLONE_SIGHAND,
which requires SIGKILL/SIGSTOP to be supported).
2023 May 22
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...d clears TIF_SIGPENDING.
> >
> > SIGSTOP, PTRACE_INTERRUPT, freezer can come and set TIF_SIGPENDING again.
> > In this case the main for (;;) loop will spin without sleeping until
> > vhost_task_should_stop() becomes true?
>
> I see. So I either have to be able to call get_signal after SIGKILL, or
> at that point work like a kthread and ignore signals, like:
>
> if (dead && signal_pending())
> 	flush_signals();
> ?
Right now I think that "int dead" should die, and you should simply do
get_signal() + clear(SIGPENDING) if signal_pending() == T, ...
2023 May 22
1
[PATCH 1/3] signal: Don't always put SIGKILL in shared_pending
When get_signal detects the task has been marked to be killed, we try to
clean up the SIGKILL by doing a sigdelset and recalc_sigpending, but we
still leave it in shared_pending. If the signal is being short-circuit
delivered, there is no need to put it in shared_pending, so this adds a check
in complete_signal.
This patch was modified from Eric Biederman's <ebiederm at xmission.com>
original
2023 May 24
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/23, Eric W. Biederman wrote:
>
> I want to point out that we need to consider not just SIGKILL, but
> SIGABRT that causes a coredump, as well as the process performing
> an ordinary exit(2). All of which will cause get_signal to return
> SIGKILL in this context.
Yes, but probably SIGABRT/exit doesn't really differ from SIGKILL wrt
vhost_worker().
> It is probably not the worst thing in the world, but what this means
> is now if you pass a copy of the vhost file descriptor to another
> process the vhost...
2019 Jun 19
0
nouveau: DRM: GPU lockup - switching to software fbcon
...10380.560416] nouveau_drm_postclose+0x4c/0xe0
[10380.560418] drm_file_free.part.0+0x1e0/0x290
[10380.560420] drm_release+0xa7/0xe0
[10380.591300] __fput+0xc7/0x250
[10380.592291] task_work_run+0x90/0xc0
[10380.593271] do_exit+0x286/0xb10
[10380.594306] do_group_exit+0x33/0xa0
[10380.595333] get_signal+0x12d/0x7e0
[10380.596304] do_signal+0x23/0x590
[10380.597490] ? __bpf_prog_run64+0x40/0x40
[10380.598441] ? __seccomp_filter+0x7e/0x430
[10380.599503] ? __x64_sys_futex+0x12c/0x145
[10380.600477] exit_to_usermode_loop+0x5d/0x70
[10380.601447] do_syscall_64+0x21f/0x2e8
[10380.602420] entry_S...
2018 Mar 27
0
BUG: corrupted list in remove_wait_queue
...?__fput+0x327/0x7e0 fs/file_table.c:209
> ?____fput+0x15/0x20 fs/file_table.c:243
> ?task_work_run+0x199/0x270 kernel/task_work.c:113
> ?exit_task_work include/linux/task_work.h:22 [inline]
> ?do_exit+0x9bb/0x1ad0 kernel/exit.c:865
> ?do_group_exit+0x149/0x400 kernel/exit.c:968
> ?get_signal+0x73a/0x16d0 kernel/signal.c:2469
> ?do_signal+0x90/0x1e90 arch/x86/kernel/signal.c:809
> ?exit_to_usermode_loop+0x258/0x2f0 arch/x86/entry/common.c:162
> ?prepare_exit_to_usermode arch/x86/entry/common.c:196 [inline]
> ?syscall_return_slowpath arch/x86/entry/common.c:265 [inline]
>...
2019 May 18
2
[Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver
...0
[ 2504.251897] __fput+0xb1/0x220
[ 2504.252260] task_work_run+0x79/0xa0
[ 2504.252676] do_exit+0x2ca/0xc10
[ 2504.253063] ? __switch_to_asm+0x40/0x70
[ 2504.253530] ? __switch_to_asm+0x34/0x70
[ 2504.253995] ? __switch_to_asm+0x40/0x70
[ 2504.254446] do_group_exit+0x35/0xa0
[ 2504.254865] get_signal+0x14e/0x7a0
[ 2504.255281] ? __switch_to_asm+0x34/0x70
[ 2504.255749] ? __switch_to_asm+0x40/0x70
[ 2504.256224] do_signal+0x2b/0x5e0
[ 2504.256619] ? __switch_to_asm+0x40/0x70
[ 2504.257086] ? __switch_to_asm+0x34/0x70
[ 2504.257552] ? __switch_to_asm+0x40/0x70
[ 2504.258022] ? __switch_to_...
2019 Oct 01
0
[PATCH net v3] vsock: Fix a lockdep warning in __vsock_release()
...v_sock]
> __vsock_release+0x24/0xf0 [vsock]
> __vsock_release+0xa0/0xf0 [vsock]
> vsock_release+0x12/0x30 [vsock]
> __sock_release+0x37/0xa0
> sock_close+0x14/0x20
> __fput+0xc1/0x250
> task_work_run+0x98/0xc0
> do_exit+0x344/0xc60
> do_group_exit+0x47/0xb0
> get_signal+0x15c/0xc50
> do_signal+0x30/0x720
> exit_to_usermode_loop+0x50/0xa0
> do_syscall_64+0x24e/0x270
> entry_SYSCALL_64_after_hwframe+0x49/0xbe
> RIP: 0033:0x7f4184e85f31
>
> Tested-by: Stefano Garzarella <sgarzare at redhat.com>
> Signed-off-by: Dexuan Cui <decui...
2018 Apr 16
2
[Bug 106080] New: Time-out in `nvkm_fifo_chan_child_fini()`
...bicoid.molgen.mpg.de kernel: __fput+0xa6/0x1e0
Apr 14 02:19:08 bicoid.molgen.mpg.de kernel: task_work_run+0x7e/0xa0
Apr 14 02:19:08 bicoid.molgen.mpg.de kernel: do_exit+0x2bc/0xb20
Apr 14 02:19:08 bicoid.molgen.mpg.de kernel: do_group_exit+0x33/0xa0
Apr 14 02:19:08 bicoid.molgen.mpg.de kernel: get_signal+0x1e4/0x570
Apr 14 02:19:08 bicoid.molgen.mpg.de kernel: do_signal+0x23/0x5c0
Apr 14 02:19:08 bicoid.molgen.mpg.de kernel: ? wake_up_q+0x54/0x80
Apr 14 02:19:08 bicoid.molgen.mpg.de kernel: ? SyS_futex+0x11d/0x150
Apr 14 02:19:08 bicoid.molgen.mpg.de kernel: exit_to_usermode_loop+0x79/0x90
Apr...
2019 Sep 26
0
[PATCH net v2] vsock: Fix a lockdep warning in __vsock_release()
...v_sock]
> __vsock_release+0x24/0xf0 [vsock]
> __vsock_release+0xa0/0xf0 [vsock]
> vsock_release+0x12/0x30 [vsock]
> __sock_release+0x37/0xa0
> sock_close+0x14/0x20
> __fput+0xc1/0x250
> task_work_run+0x98/0xc0
> do_exit+0x344/0xc60
> do_group_exit+0x47/0xb0
> get_signal+0x15c/0xc50
> do_signal+0x30/0x720
> exit_to_usermode_loop+0x50/0xa0
> do_syscall_64+0x24e/0x270
> entry_SYSCALL_64_after_hwframe+0x49/0xbe
> RIP: 0033:0x7f4184e85f31
>
> Signed-off-by: Dexuan Cui <decui at microsoft.com>
> ---
>
> NOTE: I only tested the c...
2019 Jun 14
2
nouveau: DRM: GPU lockup - switching to software fbcon
5.2.0-rc4-next-20190613
dmesg
nouveau 0000:01:00.0: DRM: GPU lockup - switching to software fbcon
nouveau 0000:01:00.0: fifo: SCHED_ERROR 0a [CTXSW_TIMEOUT]
nouveau 0000:01:00.0: fifo: runlist 0: scheduled for recovery
nouveau 0000:01:00.0: fifo: channel 5: killed
nouveau 0000:01:00.0: fifo: engine 6: scheduled for recovery
nouveau 0000:01:00.0: fifo: engine 0: scheduled for recovery
2019 May 20
0
[Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver
...> [ 2504.252260] task_work_run+0x79/0xa0
> [ 2504.252676] do_exit+0x2ca/0xc10
> [ 2504.253063] ? __switch_to_asm+0x40/0x70
> [ 2504.253530] ? __switch_to_asm+0x34/0x70
> [ 2504.253995] ? __switch_to_asm+0x40/0x70
> [ 2504.254446] do_group_exit+0x35/0xa0
> [ 2504.254865] get_signal+0x14e/0x7a0
> [ 2504.255281] ? __switch_to_asm+0x34/0x70
> [ 2504.255749] ? __switch_to_asm+0x40/0x70
> [ 2504.256224] do_signal+0x2b/0x5e0
> [ 2504.256619] ? __switch_to_asm+0x40/0x70
> [ 2504.257086] ? __switch_to_asm+0x34/0x70
> [ 2504.257552] ? __switch_to_asm+0x40/0x70...
2020 Mar 19
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...t_for_completion+0x250/0x250
[ 138.112158] ? lock_downgrade+0x380/0x380
[ 138.116176] ? check_flags.part.0+0x82/0x210
[ 138.120463] mmput+0xb5/0x210
[ 138.123444] do_exit+0x602/0x14c0
[ 138.126776] ? mm_update_next_owner+0x400/0x400
[ 138.131329] do_group_exit+0x8a/0x140
[ 138.135006] get_signal+0x25b/0x1080
[ 138.138606] do_signal+0x8c/0xa90
[ 138.141928] ? _raw_spin_unlock_irq+0x24/0x30
[ 138.146292] ? mark_held_locks+0x24/0x90
[ 138.150219] ? _raw_spin_unlock_irq+0x24/0x30
[ 138.154580] ? lockdep_hardirqs_on+0x190/0x280
[ 138.159026] ? setup_sigcontext+0x260/0x260
[ 138.163...
2016 Jun 15
0
[PATCH v7 00/12] Support non-lru page migration
...[<ffffffff814d56bf>] dump_stack+0x68/0x92
[ 315.186622] [<ffffffff810d5e6a>] ___might_sleep+0x3bd/0x3c9
[ 315.186625] [<ffffffff810d5fd1>] __might_sleep+0x15b/0x167
[ 315.186630] [<ffffffff810ac4c1>] exit_signals+0x7a/0x34f
[ 315.186633] [<ffffffff810ac447>] ? get_signal+0xd9b/0xd9b
[ 315.186636] [<ffffffff811aee21>] ? irq_work_queue+0x101/0x11c
[ 315.186640] [<ffffffff8111f0ac>] ? debug_show_all_locks+0x226/0x226
[ 315.186645] [<ffffffff81096357>] do_exit+0x34d/0x1b4e
[ 315.186648] [<ffffffff81130e16>] ? vprintk_emit+0x4b1/0x4d3
[...