
Displaying 20 results from an estimated 101 matches for "653,7".

2023 Apr 17
1
[PATCH v3] drm/nouveau: fix incorrect conversion to dma_resv_wait_timeout()
...veau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli, > struct drm_nouveau_gem_pushbuf_reloc *reloc, > struct drm_nouveau_gem_pushbuf_bo *bo) > { > - long ret = 0; > + int ret = 0; > unsigned i; > > for (i = 0; i < req->nr_relocs; i++) { > @@ -653,6 +653,7 @@ nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli, > struct drm_nouveau_gem_pushbuf_bo *b; > struct nouveau_bo *nvbo; > uint32_t data; > + long lret; > > if (unlikely(r->bo_index >= req->nr_buffers)) { > NV_PRINTK(err, cli, &q...
2007 Jan 02
2
Return value from an action function
I had always assumed that an action option should return true if it handles the action, but it seems that most button bindings actually return false, which causes a few problems. 1. The clicks pass through to windows, which is not good for rotate, screenshot or annotate. 2. I am trying to add a generic action notification which plugins can wrap to see when other actions are initiated and
2023 Apr 15
2
[PATCH v2] drm/nouveau: fix incorrect conversion to dma_resv_wait_timeout()
Commit 41d351f29528 ("drm/nouveau: stop using ttm_bo_wait") converted from ttm_bo_wait_ctx() to dma_resv_wait_timeout(). However, dma_resv_wait_timeout() returns a value greater than zero on success, as opposed to ttm_bo_wait_ctx(). As a result, relocs will fail and log errors even when the wait succeeded. Change the return code handling to match that of nouveau_gem_ioctl_cpu_prep(), which was
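The fix described above amounts to mapping dma_resv_wait_timeout()'s "remaining time" convention back onto an errno-style result, the way nouveau_gem_ioctl_cpu_prep() does. A minimal sketch of that mapping, with the call-site details (usage flag, 15*HZ timeout) assumed rather than taken from the patch:

#include <linux/dma-resv.h>
#include <linux/jiffies.h>

/* Sketch only, not the exact patch: dma_resv_wait_timeout() returns the
 * remaining timeout (> 0) on success, 0 on timeout and a negative errno on
 * error, whereas ttm_bo_wait_ctx() returned 0 on success. */
static int wait_for_bo_idle(struct dma_resv *resv)
{
	long lret;

	lret = dma_resv_wait_timeout(resv, dma_resv_usage_rw(true),
				     false, 15 * HZ);
	if (lret > 0)
		return 0;	/* fences signalled in time */
	if (lret == 0)
		return -EBUSY;	/* timed out */
	return lret;		/* interrupted or other error */
}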
2016 May 30
1
[PATCH V2 1/2] vhost_net: stop polling socket during rx processing
...ll, sock->file); > +} > + > static int vhost_net_tx_get_vq_desc(struct vhost_net *net, > struct vhost_virtqueue *vq, > struct iovec iov[], unsigned int iov_size, BTW we might want to rename these functions, name no longer reflects function ... > @@ -627,6 +653,7 @@ static void handle_rx(struct vhost_net *net) > if (!sock) > goto out; > vhost_disable_notify(&net->dev, vq); > + vhost_net_disable_vq(net, vq); > > vhost_hlen = nvq->vhost_hlen; > sock_hlen = nvq->sock_hlen; > @@ -715,9 +742,10 @@ static void h...
2016 May 30
4
[PATCH V2 0/2] vhost_net polling optimization
Hi: This series tries to optimize vhost_net polling at two points: - Stop rx polling to reduce unnecessary wakeups during handle_rx(). - Conditionally enable tx polling to reduce unnecessary traversing and spinlock touching. Tests show about a 17% improvement in rx pps. Please review. Changes from V1: - use vhost_net_disable_vq()/vhost_net_enable_vq() instead of open coding. -
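The handle_rx() hunks quoted above show where the first optimization lands: disable socket polling right after notification is disabled, and re-enable it once the receive loop is done. A rough sketch of that shape (the surrounding declarations and the receive loop are elided and assumed from drivers/vhost/net.c, so treat this as an outline rather than the actual function):

/* Outline of the rx path with the polling change; not the real handle_rx(). */
static void handle_rx(struct vhost_net *net)
{
	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_RX];
	struct vhost_virtqueue *vq = &nvq->vq;
	struct socket *sock;

	mutex_lock(&vq->mutex);
	sock = vq->private_data;
	if (!sock)
		goto out;

	vhost_disable_notify(&net->dev, vq);
	vhost_net_disable_vq(net, vq);	/* stop socket polling while draining */

	/* ... receive loop elided ... */

	vhost_net_enable_vq(net, vq);	/* resume polling before returning */
out:
	mutex_unlock(&vq->mutex);
}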
2007 Nov 04
0
7 commits - libswfdec/swfdec_text_field_movie.c libswfdec/swfdec_text_field_movie.h libswfdec/swfdec_text_field_movie_html.c
...+0200 Stop iterating in TextField's render method when we have passed vertical limit diff --git a/libswfdec/swfdec_text_field_movie.c b/libswfdec/swfdec_text_field_movie.c index 16e58db..90c2990 100644 --- a/libswfdec/swfdec_text_field_movie.c +++ b/libswfdec/swfdec_text_field_movie.c @@ -653,7 +653,7 @@ swfdec_text_field_movie_render (SwfdecMovie *movie, cairo_t *cr, y = movie->original_extents.y0 + EXTRA_MARGIN; cairo_move_to (cr, x, y); - for (i = 0; layouts[i].layout != NULL/* && y < limit.y1*/; i++) + for (i = 0; layouts[i].layout != NULL && y < l...
2008 May 21
3
[LLVMdev] 2.3 Pre-release available for testing
Razvan Aciu wrote: > As I saw from the mailing list the MSVC 2005 patches were made to take into > account the new files from the development branch, files which are not in > the 2.3 release. So for now the below patch is the only one functional for > the release. If I am wrong, please someone correct me. > > If someone can make a 2005 patch for the release branch, it is ok.
2017 Apr 10
0
[PATCH 09/11] nvkm/ramgf100: Hook up ram training pattern init for NVC0+
...anged, 1 insertion(+), 2 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgf100.c index a469719..eebd20b 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgf100.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgf100.c @@ -653,8 +653,7 @@ gf100_ram_init(struct nvkm_ram *base) /* XXX Why does the blob do this? */ nvkm_mask(device, 0x137360, 0x00000002, 0x00000000); - /* XXX: Don't hook up yet for bisectability */ - return 0; + return gf100_ram_train_init(base); } static const struct nvkm_ram_func -- 2.9.3
2018 Jul 03
0
[PATCH v2 net-next 3/4] vhost_net: Avoid rx queue wake-ups during busypoll
...ki Makita <makita.toshiaki at lab.ntt.co.jp> --- drivers/vhost/net.c | 26 +++++++++++++++++++------- 1 file changed, 19 insertions(+), 7 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 811c0e5..791bc8b 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -653,7 +653,8 @@ static void vhost_rx_signal_used(struct vhost_net_virtqueue *nvq) nvq->done_idx = 0; } -static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk) +static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk, + bool *busyloop_intr) {...
2000 Dec 27
1
supervise and openssh (fwd)
fyi, not all changes are integrated. -------------- next part -------------- An embedded message was scrubbed... From: blinky <blinky at gmx.net> Subject: Re: supervise and openssh Date: Wed, 27 Dec 2000 14:33:59 +0100 Size: 1399 Url: http://lists.mindrot.org/pipermail/openssh-unix-dev/attachments/20001227/02628ef8/attachment.mht
2016 May 30
0
[PATCH V2 1/2] vhost_net: stop polling socket during rx processing
...s); + struct socket *sock; + + sock = vq->private_data; + if (!sock) + return 0; + + return vhost_poll_start(poll, sock->file); +} + static int vhost_net_tx_get_vq_desc(struct vhost_net *net, struct vhost_virtqueue *vq, struct iovec iov[], unsigned int iov_size, @@ -627,6 +653,7 @@ static void handle_rx(struct vhost_net *net) if (!sock) goto out; vhost_disable_notify(&net->dev, vq); + vhost_net_disable_vq(net, vq); vhost_hlen = nvq->vhost_hlen; sock_hlen = nvq->sock_hlen; @@ -715,9 +742,10 @@ static void handle_rx(struct vhost_net *net) total...
2007 Apr 18
0
[PATCH 7/12] gdt-accessor
...); cpu = get_cpu(); - save_desc_40 = per_cpu(cpu_gdt_table, cpu)[0x40 / 8]; - per_cpu(cpu_gdt_table, cpu)[0x40 / 8] = bad_bios_desc; + gdt = get_cpu_gdt_table(cpu); + save_desc_40 = gdt[desc_number(0x40)]; + gdt[desc_number(0x40)] = bad_bios_desc; local_save_flags(flags); APM_DO_CLI; @@ -653,7 +656,7 @@ error = apm_bios_call_simple_asm(func, ebx_in, ecx_in, eax); APM_DO_RESTORE_SEGS; local_irq_restore(flags); - __get_cpu_var(cpu_gdt_table)[0x40 / 8] = save_desc_40; + gdt[desc_number(0x40)] = save_desc_40; put_cpu(); apm_restore_cpus(cpus); return error; @@ -2295,35 +2298,36...
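The gdt-accessor patches in these results all do the same thing: replace open-coded per_cpu(cpu_gdt_table, cpu)[0x40 / 8] accesses with a per-cpu accessor (plus, in some versions, an index helper such as desc_number() or segment_index()). The excerpts do not include the helper definitions themselves, so the following is only a plausible sketch of what they might look like:

/* Sketch only: the real definitions live in the patches and may differ.
 * 0x40 is a byte offset into the GDT; descriptors are 8 bytes each. */
#define desc_number(offset)	((offset) / 8)

static inline struct desc_struct *get_cpu_gdt_table(unsigned int cpu)
{
	return per_cpu(cpu_gdt_table, cpu);
}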
2007 Apr 18
1
[PATCH 2/3] Gdt_accessor
...s = apm_save_cpus(); cpu = get_cpu(); - save_desc_40 = per_cpu(cpu_gdt_table, cpu)[0x40 / 8]; - per_cpu(cpu_gdt_table, cpu)[0x40 / 8] = bad_bios_desc; + gdt = get_cpu_gdt_table(cpu); + save_desc_40 = gdt[0x40 / 8]; + gdt[0x40 / 8] = bad_bios_desc; local_save_flags(flags); APM_DO_CLI; @@ -653,7 +656,7 @@ static u8 apm_bios_call_simple(u32 func, error = apm_bios_call_simple_asm(func, ebx_in, ecx_in, eax); APM_DO_RESTORE_SEGS; local_irq_restore(flags); - __get_cpu_var(cpu_gdt_table)[0x40 / 8] = save_desc_40; + gdt[0x40 / 8] = save_desc_40; put_cpu(); apm_restore_cpus(cpus); re...
2007 Apr 18
2
[PATCH 8/14] i386 / Add a per cpu gdt accessor
...cpu = get_cpu(); - save_desc_40 = per_cpu(cpu_gdt_table, cpu)[0x40 / 8]; - per_cpu(cpu_gdt_table, cpu)[0x40 / 8] = bad_bios_desc; + gdt = get_cpu_gdt_table(cpu); + save_desc_40 = gdt[segment_index(0x40)]; + gdt[segment_index(0x40)] = bad_bios_desc; local_save_flags(flags); APM_DO_CLI; @@ -653,7 +656,7 @@ error = apm_bios_call_simple_asm(func, ebx_in, ecx_in, eax); APM_DO_RESTORE_SEGS; local_irq_restore(flags); - __get_cpu_var(cpu_gdt_table)[0x40 / 8] = save_desc_40; + gdt[segment_index(0x40)] = save_desc_40; put_cpu(); apm_restore_cpus(cpus); return error; @@ -2295,35 +2298,...
2013 May 07
5
[PATCH 0/4] vhost private_data rcu removal
Asias He (4): vhost-net: Always access vq->private_data under vq mutex vhost-test: Always access vq->private_data under vq mutex vhost-scsi: Always access vq->private_data under vq mutex vhost: Remove custom vhost rcu usage drivers/vhost/net.c | 37 ++++++++++++++++--------------------- drivers/vhost/scsi.c | 17 ++++++----------- drivers/vhost/test.c | 20
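The point of the series is that vq->private_data no longer needs vhost's custom RCU scheme: every reader and writer already holds the per-virtqueue mutex, so plain accesses are safe. A minimal sketch of the resulting pattern on the backend-set side (names and surroundings assumed, not taken from the patches):

/* Sketch of the locking rule the series enforces: private_data is only
 * touched under vq->mutex, so no RCU dereference/assignment is needed. */
static void set_backend_sketch(struct vhost_virtqueue *vq, struct socket *sock)
{
	mutex_lock(&vq->mutex);
	vq->private_data = sock;	/* readers also hold vq->mutex */
	mutex_unlock(&vq->mutex);
}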