similar to: [RFC PATCH 0/8] vhost_tasks: Use CLONE_THREAD/SIGHAND

Displaying 20 results from an estimated 300 matches similar to: "[RFC PATCH 0/8] vhost_tasks: Use CLONE_THREAD/SIGHAND"

2023 Jun 01
0
[RFC PATCH 0/8] vhost_tasks: Use CLONE_THREAD/SIGHAND
On 6/1/23 5:47 AM, Christian Brauner wrote: > On Thu, Jun 01, 2023 at 09:58:38AM +0200, Thorsten Leemhuis wrote: >> On 19.05.23 14:15, Christian Brauner wrote: >>> On Thu, May 18, 2023 at 10:25:11AM +0200, Christian Brauner wrote: >>>> On Wed, May 17, 2023 at 07:09:12PM -0500, Mike Christie wrote: >>>>> This patch allows the vhost and vhost_task code to
2023 Jun 01
0
[RFC PATCH 0/8] vhost_tasks: Use CLONE_THREAD/SIGHAND
On 19.05.23 14:15, Christian Brauner wrote: > On Thu, May 18, 2023 at 10:25:11AM +0200, Christian Brauner wrote: >> On Wed, May 17, 2023 at 07:09:12PM -0500, Mike Christie wrote: >>> This patch allows the vhost and vhost_task code to use CLONE_THREAD, >>> CLONE_SIGHAND and CLONE_FILES. It's an RFC because I didn't do all the >>> normal testing,
2023 Jun 01
4
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
When switching from kthreads to vhost_tasks two bugs were added: 1. The vhost worker tasks now show up as processes, so scripts doing ps or ps a would now incorrectly detect the vhost task as another process. 2. kthreads disabled freeze by setting PF_NOFREEZE, but vhost tasks didn't disable or add support for it. To fix both bugs, this switches the vhost task to be a thread in the
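For reference, the shape of that switch at the clone level looks roughly like the sketch below. The exact flag set and the kernel_clone_args fields used by the merged patch may differ; everything named here is illustrative rather than a quote of the real code.

/*
 * Sketch only: create the worker through copy_process() so it becomes a
 * thread of the caller (the process that did VHOST_SET_OWNER) instead of
 * a separate process under kthreadd. Flag and field choices here are
 * assumptions for illustration.
 */
static struct task_struct *worker_thread_create_sketch(int (*fn)(void *),
							void *arg,
							const char *name)
{
	struct kernel_clone_args args = {
		.flags		= CLONE_FS | CLONE_VM | CLONE_THREAD |
				  CLONE_SIGHAND | CLONE_UNTRACED,
		.exit_signal	= 0,	/* threads do not signal the parent */
		.fn		= fn,
		.fn_arg		= arg,
		.name		= name,
		.user_worker	= 1,	/* user worker, not a PF_KTHREAD kthread */
	};

	/*
	 * Runs in the ioctl caller's context, so the new task shows up in
	 * ps as a thread of that process rather than as its own process,
	 * and it is handled by the freezer like other userspace threads.
	 */
	return copy_process(NULL, 0, NUMA_NO_NODE, &args);
}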
2023 Jun 01
0
[RFC PATCH 0/8] vhost_tasks: Use CLONE_THREAD/SIGHAND
On Thu, Jun 1, 2023 at 6:47 AM Christian Brauner <brauner at kernel.org> wrote: > > @Mike, do you want to prepare an updated version of the temporary fix. > If @Linus prefers to just apply it directly he can just grab it from the > list rather than delaying it. Make sure to grab a Co-developed-by line > on this, @Mike. Yeah, let's apply the known "fix the immediate
2023 May 22
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On Sun, May 21, 2023 at 09:51:24PM -0500, Mike Christie wrote: > When switching from kthreads to vhost_tasks two bugs were added: > 1. The vhost worker tasks now show up as processes, so scripts doing ps > or ps a would now incorrectly detect the vhost task as another process. > 2. kthreads disabled freeze by setting PF_NOFREEZE, but vhost tasks > didn't disable or
2023 May 05
2
[PATCH v11 8/8] vhost: use vhost_tasks for worker threads
On 5/5/23 1:22 PM, Linus Torvalds wrote: > On Fri, May 5, 2023 at 6:40 AM Nicolas Dichtel > <nicolas.dichtel at 6wind.com> wrote: >> >> Is this an intended behavior? >> This breaks some of our scripts. > > It doesn't just break your scripts (which counts as a regression), I > think it's really wrong. > > The worker threads should show up as
2023 May 05
1
[PATCH v11 8/8] vhost: use vhost_tasks for worker threads
On Fri, May 5, 2023 at 6:40 AM Nicolas Dichtel <nicolas.dichtel at 6wind.com> wrote: > > Is this an intended behavior? > This breaks some of our scripts. It doesn't just break your scripts (which counts as a regression), I think it's really wrong. The worker threads should show up as threads of the thing that started them, not as processes. So they should show up in
2023 May 22
3
[PATCH 0/3] vhost: Fix freezer/ps regressions
The following patches made over Linus's tree fix the 2 bugs: 1. vhost worker task shows up as a process forked from the parent that did VHOST_SET_OWNER ioctl instead of a process under root/kthreadd. This was breaking scripts. 2. vhost_tasks didn't disable or add support for freeze requests. The following patches fix these issues by making the vhost_task task a thread under the
2023 Jun 02
2
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Hi Mike, sorry, but somehow I can't understand this patch... I'll try to read it with a fresh head over the weekend, but for example, On 06/01, Mike Christie wrote: > > static int vhost_task_fn(void *data) > { > struct vhost_task *vtsk = data; > - int ret; > + bool dead = false; > + > + for (;;) { > + bool did_work; > + > + /* mb paired w/
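The diff is cut off above; for readers trying to follow the discussion, the loop being reviewed has roughly the shape sketched below. The VHOST_TASK_FLAGS_STOP bit, the vtsk->fn callback and the vtsk->exited completion are assumed names for illustration and may not match the merged code exactly.

static int vhost_task_fn(void *data)
{
	struct vhost_task *vtsk = data;
	bool dead = false;

	for (;;) {
		bool did_work;

		if (!dead && signal_pending(current)) {
			struct ksignal ksig;

			/*
			 * On SIGKILL of the owner, remember that we are dead
			 * but keep running: the vhost file release path is
			 * what finally tells the worker to stop.
			 */
			dead = get_signal(&ksig);
			if (dead)
				clear_thread_flag(TIF_SIGPENDING);
		}

		/* mb paired w/ the stop path setting the stop bit */
		set_current_state(TASK_INTERRUPTIBLE);

		if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags)) {
			__set_current_state(TASK_RUNNING);
			break;
		}

		did_work = vtsk->fn(vtsk->data);
		if (!did_work)
			schedule();
	}

	complete(&vtsk->exited);
	do_exit(0);
}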
2023 Mar 23
2
[PATCH 1/1] vhost_task: Fix vhost_task_create return value
On Thu, Mar 23, 2023 at 12:50:49PM +0100, Christian Brauner wrote: > On Thu, Mar 23, 2023 at 07:43:04AM -0400, Michael S. Tsirkin wrote: > > On Thu, Mar 23, 2023 at 11:44:45AM +0100, Christian Brauner wrote: > > > On Thu, Mar 23, 2023 at 03:37:19AM -0400, Michael S. Tsirkin wrote: > > > > On Wed, Mar 22, 2023 at 01:56:05PM -0500, Mike Christie wrote: > > >
2023 May 22
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/22, Mike Christie wrote: > > On 5/22/23 7:30 AM, Oleg Nesterov wrote: > >> + /* > >> + * When we get a SIGKILL our release function will > >> + * be called. That will stop new IOs from being queued > >> + * and check for outstanding cmd responses. It will then > >> + * call vhost_task_stop to tell us to return and exit. >
2023 Jun 06
2
[CFT][PATCH v3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 06/06, Mike Christie wrote: > > On 6/6/23 7:16 AM, Oleg Nesterov wrote: > > On 06/05, Mike Christie wrote: > > > >> So it works as if we were still using a kthread: > >> > >> 1. Userspace thread0 opens /dev/vhost-$something. > >> 2. thread0 does VHOST_SET_OWNER ioctl. This calls vhost_task_create() to > >> create the task_struct
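Condensed, the first steps of that flow look something like the sketch below. vhost_worker_fn and the dev->worker_task field are simplified stand-ins, and the NULL-on-failure check shown is only one of the conventions that were discussed (see the "Fix vhost_task_create return value" thread above).

/* Sketch of the VHOST_SET_OWNER path; names are simplified assumptions. */
static long vhost_set_owner_sketch(struct vhost_dev *dev)
{
	struct vhost_task *vtsk;

	if (dev->worker_task)
		return -EBUSY;		/* owner already set */

	/*
	 * This runs from the ioctl, i.e. in thread0's context, so the
	 * worker is cloned from the owning userspace process instead of
	 * from kthreadd.
	 */
	vtsk = vhost_task_create(vhost_worker_fn, dev, "vhost-worker");
	if (!vtsk)
		return -ENOMEM;

	dev->worker_task = vtsk;
	vhost_task_start(vtsk);		/* worker begins running vhost_worker_fn */
	return 0;
}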
2004 Apr 28
1
Segmentation Fault when using dig, nslookup, host...
I have setup Centos-3.1 on a Dell Optiplex GX1 this week. It has to replace an old Compaq Deskpro I used as a firewall/router running the no longer supported RedHat 7.1. I have the NAT firewall up and running, and it works OK. However, there is one thing bugging me. I cannot use any client software that queries a nameserver. I tried to use "dig", "host" and
2023 Jun 06
1
[CFT][PATCH v3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 6/6/23 7:16 AM, Oleg Nesterov wrote: > On 06/05, Mike Christie wrote: >> >> On 6/5/23 10:10 AM, Oleg Nesterov wrote: >>> On 06/03, michael.christie at oracle.com wrote: >>>> >>>> On 6/2/23 11:15 PM, Eric W. Biederman wrote: >>>> The problem is that as part of the flush the drivers/vhost/scsi.c code >>>> will wait for
2023 Jun 06
1
[CFT][PATCH v3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 06/05, Mike Christie wrote: > > On 6/5/23 10:10 AM, Oleg Nesterov wrote: > > On 06/03, michael.christie at oracle.com wrote: > >> > >> On 6/2/23 11:15 PM, Eric W. Biederman wrote: > >> The problem is that as part of the flush the drivers/vhost/scsi.c code > >> will wait for outstanding commands, because we can't free the device and >
2008 Aug 24
2
Unusual bug in glusterfsd
Hi, I'm rather new to this project, having stumbled across it earlier this afternoon, so forgive me if I'm still trying to find my way around. I was in need of an alternative to NFS that would let me spread the task of sharing my downloaded source code files across a couple of boxes, and GlusterFS looked like a great candidate, having had no luck with Coda or OpenAFS. I also want
2023 May 24
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/23, Eric W. Biederman wrote: > > I want to point out that we need to consider not just SIGKILL, but > SIGABRT that causes a coredump, as well as the process performing > an ordinary exit(2). All of which will cause get_signal to return > SIGKILL in this context. Yes, but probably SIGABRT/exit doesn't really differ from SIGKILL wrt vhost_worker(). > It is probably not
2023 Jun 05
1
[CFT][PATCH v3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 6/5/23 10:10 AM, Oleg Nesterov wrote: > On 06/03, michael.christie at oracle.com wrote: >> >> On 6/2/23 11:15 PM, Eric W. Biederman wrote: >> The problem is that as part of the flush the drivers/vhost/scsi.c code >> will wait for outstanding commands, because we can't free the device and >> its resources before the commands complete or we will hit the
2023 May 23
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Oleg Nesterov <oleg at redhat.com> writes: > On 05/22, Oleg Nesterov wrote: >> >> Right now I think that "int dead" should die, > > No, probably we shouldn't call get_signal() if we have already > dequeued SIGKILL. Very much agreed. It is one thing to add a patch to move do_exit out of get_signal. It is another to keep calling get_signal after that.
2023 May 31
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/31, Jason Wang wrote: > > On Wed, May 31, 2023 at 3:25 PM Oleg Nesterov <oleg at redhat.com> wrote: > > > > On 05/31, Jason Wang wrote: > > > > > > On 2023/5/23 20:15, Oleg Nesterov wrote: > > > > > > > > /* make sure flag is seen after deletion */ > > > > smp_wmb(); > > > >
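The fragment stops in the middle of the barrier discussion. As a generic reminder of the pattern being debated, an smp_wmb() on the writer side only orders the two stores; the reader must pair it with smp_rmb() (or an acquire/dependency) to benefit. The struct and field names below are made up for illustration and are not the actual vhost code.

/* Writer: make the flag update visible before publishing the deletion. */
struct work_item {
	int	attached;
	bool	deleted;
};

static void publish_deletion(struct work_item *it)
{
	it->attached = 0;
	/* make sure flag is seen (as cleared) before the deletion is */
	smp_wmb();
	WRITE_ONCE(it->deleted, true);
}

/* Reader: pairs with the smp_wmb() in publish_deletion(). */
static bool deletion_observed(struct work_item *it)
{
	if (!READ_ONCE(it->deleted))
		return false;
	smp_rmb();
	/* if deleted was seen, the cleared flag must be visible too */
	return it->attached == 0;
}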