Displaying 20 results from an estimated 42 matches for "copy_process".
2023 Mar 23
2
[PATCH 1/1] vhost_task: Fix vhost_task_create return value
...pposed to return the vhost_task or NULL on
> > > > > failure. This fixes it to return the correct value when the allocation
> > > > > of the struct fails.
> > > > >
> > > > > Fixes: 77feab3c4156 ("vhost_task: Allow vhost layer to use copy_process") # mainline only
> > > > > Reported-by: syzbot+6b27b2d2aba1c80cc13b at syzkaller.appspotmail.com
> > > > > Signed-off-by: Mike Christie <michael.christie at oracle.com>
> > > >
> > > > Acked-by: Michael S. Tsirkin <mst at redhat....
2023 Mar 11
2
[PATCH 00/11] Use copy_process in vhost layer
On Fri, Mar 10, 2023 at 2:04 PM Mike Christie
<michael.christie at oracle.com> wrote:
>
> The following patches were made over Linus's tree and apply over next. They
> allow the vhost layer to use copy_process instead of using
> workqueue_structs to create worker threads for VM's devices.
Ok, all these patches looked fine to me from a quick scan - nothing
that I reacted to as objectionable, and several of them looked like
nice cleanups.
The only one I went "Why do you do it that way" f...
2023 May 05
1
[PATCH v11 8/8] vhost: use vhost_tasks for worker threads
...really wrong.
The worker threads should show up as threads of the thing that started
them, not as processes.
So they should show up in 'ps' only when one of the "show threads" flags is set.
But I suspect the fix is trivial: the virtio code should likely use
CLONE_THREAD for the copy_process() it does.
It should look more like "create_io_thread()" than "copy_process()", I think.
For example, do virtio worker threads really want their own signals
and files? That sounds wrong. create_io_thread() uses all of
CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_IO...
2023 May 05
2
[PATCH v11 8/8] vhost: use vhost_tasks for worker threads
...ads should show up as threads of the thing that started
> them, not as processes.
>
> So they should show up in 'ps' only when one of the "show threads" flags is set.
>
> But I suspect the fix is trivial: the virtio code should likely use
> CLONE_THREAD for the copy_process() it does.
>
> It should look more like "create_io_thread()" than "copy_process()", I think.
>
> For example, do virtio worker threads really want their own signals
> and files? That sounds wrong. create_io_thread() uses all of
>
> CLONE_FS|CLONE_FILES|C...
2023 Mar 22
2
[PATCH 1/1] vhost_task: Fix vhost_task_create return value
vhost_task_create is supposed to return the vhost_task or NULL on
failure. This fixes it to return the correct value when the allocation
of the struct fails.
Fixes: 77feab3c4156 ("vhost_task: Allow vhost layer to use copy_process") # mainline only
Reported-by: syzbot+6b27b2d2aba1c80cc13b at syzkaller.appspotmail.com
Signed-off-by: Mike Christie <michael.christie at oracle.com>
---
kernel/vhost_task.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
ind...
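The diff itself is cut off in this excerpt. Purely to illustrate the contract the commit message describes (return the new vhost_task on success, NULL when allocating the struct fails), a hedged sketch of the error path; the signature and body here are approximations, not the actual patch:

struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg,
				     const char *name)
{
	struct vhost_task *vtsk;

	vtsk = kzalloc(sizeof(*vtsk), GFP_KERNEL);
	if (!vtsk)
		return NULL;	/* callers expect NULL on failure */

	/* ... set up vtsk and create the task via copy_process();
	 * details omitted from this sketch ... */
	return vtsk;
}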
2012 Nov 16
5
[ 3009.778974] mcelog:16842 map pfn expected mapping type write-back for [mem 0x0009f000-0x000a0fff], got uncached-minus
...6/0x1f0
[ 3009.895488] [<ffffffff8111a65c>] unmap_vmas+0x4c/0xa0
[ 3009.905134] [<ffffffff8111c8fa>] exit_mmap+0x9a/0x180
[ 3009.914706] [<ffffffff81064e72>] mmput+0x52/0xd0
[ 3009.924252] [<ffffffff810652b7>] dup_mm+0x3c7/0x510
[ 3009.933839] [<ffffffff81065fd5>] copy_process+0xac5/0x14a0
[ 3009.943430] [<ffffffff81066af3>] do_fork+0x53/0x360
[ 3009.952843] [<ffffffff810b25c7>] ? lock_release+0x117/0x250
[ 3009.962283] [<ffffffff817d26c0>] ? _raw_spin_unlock+0x30/0x60
[ 3009.971532] [<ffffffff817d3495>] ? sysret_check+0x22/0x5d
[ 3009.980820]...
2023 May 22
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...:1;
> unsigned long stack;
> unsigned long stack_size;
> unsigned long tls;
> diff --git a/kernel/fork.c b/kernel/fork.c
> index ed4e01daccaa..9e04ab5c3946 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -2338,14 +2338,10 @@ __latent_entropy struct task_struct *copy_process(
> p->flags |= PF_KTHREAD;
> if (args->user_worker)
> p->flags |= PF_USER_WORKER;
> - if (args->io_thread) {
> - /*
> - * Mark us an IO worker, and block any signal that isn't
> - * fatal or STOP
> - */
> + if (args->io_thread)
> p-...
2023 May 22
3
[PATCH 0/3] vhost: Fix freezer/ps regressions
The following patches, made over Linus's tree, fix the 2 bugs:
1. vhost worker task shows up as a process forked from the parent
that did VHOST_SET_OWNER ioctl instead of a process under root/kthreadd.
This was breaking scripts.
2. vhost_tasks didn't disable or add support for freeze requests.
The following patches fix these issues by making the vhost_task task
a thread under the
2023 Mar 21
1
[syzbot] [kernel?] general protection fault in vhost_task_start
...t/vhost.c:580 [inline]
The return value from vhost_task_create is incorrect if the kzalloc fails.
Christian, here is a fix for what's in your tree. Do you want me to submit
a follow-up patch like this or a replacement patch for:
commit 77feab3c4156 ("vhost_task: Allow vhost layer to use copy_process")
with the fix rolled into it?
From 0677ad6d77722f301ca35e8e0f8fd0cbd5ed8484 Mon Sep 17 00:00:00 2001
From: Mike Christie <michael.christie at oracle.com>
Date: Tue, 21 Mar 2023 12:39:39 -0500
Subject: [PATCH] vhost_task: Fix vhost_task_create return value
vhost_task_create is s...
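For context on why the return value matters: the caller tests the result against NULL before starting the task, so a bogus non-NULL value on a failed kzalloc is what later faults in vhost_task_start(), as in the syzbot report above. A caller-side sketch, with the surrounding names (vhost_worker, dev) assumed rather than taken from the patch:

/* Caller-side sketch (names approximate): the result is tested against
 * NULL, so vhost_task_create() returning anything else on a failed
 * allocation sends a garbage pointer into vhost_task_start(). */
static int vhost_worker_create_sketch(struct vhost_dev *dev, const char *name)
{
	struct vhost_task *vtsk;

	vtsk = vhost_task_create(vhost_worker, dev, name);
	if (!vtsk)
		return -ENOMEM;	/* NULL is the documented failure value */

	vhost_task_start(vtsk);
	return 0;
}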
2006 Oct 01
4
Kernel BUG at arch/x86_64/mm/../../i386/mm/hypervisor.c:197
...f80285573 0000000000000000 ffff88000c1263d8
1 20:53:48 waff ffff8800210e9dd8 ffffffff80285622
1 20:53:48 waff Call Trace:
1 20:53:48 waff [<ffffffff80285573>] mm_pin+0x183/0x220
1 20:53:48 waff [<ffffffff80285622>] _arch_dup_mmap+0x12/0x20
1 20:53:48 waff [<ffffffff802220b0>] copy_process+0xc50/0x1870
1 20:53:48 waff [<ffffffff8023680f>] do_fork+0xef/0x210
1 20:53:48 waff [<ffffffff8029c652>] recalc_sigpending+0x12/0x20
1 20:53:48 waff [<ffffffff8022005d>] sigprocmask+0xfd/0x110
1 20:53:48 waff [<ffffffff80269662>] system_call+0x86/0x8b
1 20:53:48 waff [<f...
2006 May 09
5
ParaGuest cannot see 30GB memory
Hi,
I have built Xen (32-bit) with PAE and can start multiple paraguests with 4 GB of memory, but cannot launch a single VM with more than 4 GB of memory. I would like to launch 1 VM with 30 GB or so of memory. Are there any config parameters, like kernel/initrd, that need to be changed?
I have the ramdisk set to the initrd I used to boot xen with PAE.
Thanks
- padma
2017 May 21
2
Crash in CentOS 7 kernel-3.10.0-514.16.1.el7.x86_64 in Xen PV mode
...fff811af916>] ? copy_pte_range+0x2b6/0x5a0
[ 32.305004] [<ffffffff811af8e6>] ? copy_pte_range+0x286/0x5a0
[ 32.305004] [<ffffffff811b24d2>] ? copy_page_range+0x312/0x490
[ 32.305004] [<ffffffff81083012>] ? dup_mm+0x362/0x680
[ 32.305004] [<ffffffff810847ae>] ? copy_process+0x144e/0x1960
[ 32.305004] [<ffffffff81084e71>] ? do_fork+0x91/0x2c0
[ 32.305004] [<ffffffff81085126>] ? SyS_clone+0x16/0x20
[ 32.305004] [<ffffffff816974d9>] ? stub_clone+0x69/0x90
[ 32.305004] [<ffffffff81697189>] ? system_call_fastpath+0x16/0x1b
[ 32.305004]...
2015 Jul 21
17
[Bug 91413] New: INFO: task Xorg:2419 blocked for more than 120 seconds.
...d/0x280
Jul 21 10:11:42 dioo-XPS kernel: [ 2195.316223] [<ffffffff8106dc2e>]
native_flush_tlb_others+0x2e/0x30
Jul 21 10:11:42 dioo-XPS kernel: [ 2195.316224] [<ffffffff8106dd54>]
flush_tlb_mm_range+0x64/0x170
Jul 21 10:11:42 dioo-XPS kernel: [ 2195.316226] [<ffffffff81078642>]
copy_process.part.25+0x13c2/0x1aa0
Jul 21 10:11:42 dioo-XPS kernel: [ 2195.316227] [<ffffffff81078ed5>]
do_fork+0xd5/0x340
Jul 21 10:11:42 dioo-XPS kernel: [ 2195.316228] [<ffffffff810791c6>]
SyS_clone+0x16/0x20
Jul 21 10:11:42 dioo-XPS kernel: [ 2195.316231] [<ffffffff817cffb2>]
system_cal...
2023 Jun 01
4
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...& (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER)) {
struct core_thread self;
self.task = current;
diff --git a/kernel/fork.c b/kernel/fork.c
index ed4e01daccaa..81cba91f30bb 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2336,16 +2336,16 @@ __latent_entropy struct task_struct *copy_process(
p->flags &= ~PF_KTHREAD;
if (args->kthread)
p->flags |= PF_KTHREAD;
- if (args->user_worker)
- p->flags |= PF_USER_WORKER;
- if (args->io_thread) {
+ if (args->user_worker) {
/*
- * Mark us an IO worker, and block any signal that isn't
+ * Mark us a user...
2006 May 12
0
kernel crash
...0000 00000000 00000000
May 12 15:34:18 ioana.slack.i kernel: Call Trace:
May 12 15:34:18 ioana.slack.i kernel: [<c010c272>] init_new_context+0xfd/0x19f
May 12 15:34:18 ioana.slack.i kernel: [<c011ef60>] copy_mm+0x101/0x14f
May 12 15:34:18 ioana.slack.i kernel: [<c012017e>] copy_process+0x709/0xd4f
May 12 15:34:18 ioana.slack.i kernel: [<c01879b4>] inode_update_time+0x80/0x87
May 12 15:34:18 ioana.slack.i kernel: [<c01208bd>] do_fork+0x9b/0x1a5
May 12 15:34:18 ioana.slack.i kernel: [<c0176f9b>] pipe_write+0x1c/0x20
May 12 15:34:18 ioana.slack.i kernel: [&...
2023 Jun 02
2
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Hi Mike,
sorry, but somehow I can't understand this patch...
I'll try to read it with a fresh head on Weekend, but for example,
On 06/01, Mike Christie wrote:
>
> static int vhost_task_fn(void *data)
> {
> struct vhost_task *vtsk = data;
> - int ret;
> + bool dead = false;
> +
> + for (;;) {
> + bool did_work;
> +
> + /* mb paired w/
2009 Jul 29
2
out of memory
...gfp_mask=0xd0, order=1, oomkilladj=0
Call Trace:
[<ffffffff800c3a6a>] out_of_memory+0x8e/0x2f5
[<ffffffff8000f2eb>] __alloc_pages+0x245/0x2ce
[<ffffffff8002303f>] alloc_page_interleave+0x3d/0x74
[<ffffffff8003c061>] __get_free_pages+0xe/0x71
[<ffffffff8001eb82>] copy_process+0xc6/0x15b8
[<ffffffff8009b8bc>] alloc_pid+0x1ee/0x28a
[<ffffffff80030cdb>] do_fork+0x69/0x1c1
[<ffffffff8009d98c>] keventd_create_kthread+0x0/0xc4
[<ffffffff8005df3d>] kernel_thread+0x81/0xeb
[<ffffffff8009d98c>] keventd_create_kthread+0x0/0xc4
[<ffffffff8003...
2006 Oct 04
6
RE: Kernel BUG at arch/x86_64/mm/../../i386/mm/hypervisor.c:197
...> > <ffffffff80152249>{get_zeroed_page+73}
> > Oct 3 23:27:52 tuek <ffffffff80158cf4>{__pmd_alloc+36}
> > <ffffffff8015e55e>{copy_page_range+1262}
> > Oct 3 23:27:52 tuek <ffffffff802a6bea>{rb_insert_color+250}
> > <ffffffff80127cb7>{copy_process+3079}
> > Oct 3 23:27:52 tuek <ffffffff80128c8e>{do_fork+238}
> > <ffffffff801710d6>{fd_install+54} Oct 3 23:27:52 tuek
> > <ffffffff80134e8c>{sigprocmask+220}
> > <ffffffff8010afbe>{system_call+134}
> > Oct 3 23:27:52 tuek <ffffffff801...
2017 Oct 23
0
Crash in CentOS 7 kernel-3.10.0-514.16.1.el7.x86_64 in Xen PV mode
...opy_pte_range+0x2b6/0x5a0
> [ 32.305004] [<ffffffff811af8e6>] ? copy_pte_range+0x286/0x5a0
> [ 32.305004] [<ffffffff811b24d2>] ? copy_page_range+0x312/0x490
> [ 32.305004] [<ffffffff81083012>] ? dup_mm+0x362/0x680
> [ 32.305004] [<ffffffff810847ae>] ? copy_process+0x144e/0x1960
> [ 32.305004] [<ffffffff81084e71>] ? do_fork+0x91/0x2c0
> [ 32.305004] [<ffffffff81085126>] ? SyS_clone+0x16/0x20
> [ 32.305004] [<ffffffff816974d9>] ? stub_clone+0x69/0x90
> [ 32.305004] [<ffffffff81697189>] ? system_call_fastpath+0x16...
2008 Mar 17
1
Running CentOS 4.6 domUs on CentOS 5.1 dom0 and domUs crash
...lloc_refill+0x163/0x19c
[<c0142149>] kmem_cache_alloc+0x67/0x97
[<c0111671>] pgd_alloc+0x17/0x336
[<c01199d4>] mm_init+0xd7/0x116
[<c01199e4>] mm_init+0xe7/0x116
[<c0119c8a>] copy_mm+0xbb/0x396
[<c0141f1f>] cache_alloc_refill+0x154/0x19c
[<c011aa5a>] copy_process+0x6b5/0xb0b
[<c011af9d>] do_fork+0x8a/0x16b
[<c020f4c4>] sys_socketcall+0x113/0x202
[<c0105d2c>] sys_clone+0x24/0x28
[<c010737f>] syscall_call+0x7/0xb
Code: 74 02 66 a5 a8 01 74 01 a4 5e 5b 5e 5f c3 80 3d 04 07 2f c0 00 75
1c 6a 20 6a 00 ff 74 24 0c e8 ce 37 00 00 83 c4...