search for: 133,19

Displaying 16 results from an estimated 16 matches for "133,19".

2018 Nov 05
2
[PATCH 2/5] VSOCK: support fill data to mergeable rx buffer in host
...s(vq, vq->heads, vsock_hlen + pkt->len, + &in, likely(mergeable) ? UIO_MAXIOV : 1); + if (headcount <= 0) { spin_lock_bh(&vsock->send_pkt_list_lock); list_add(&pkt->list, &vsock->send_pkt_list); spin_unlock_bh(&vsock->send_pkt_list_lock); @@ -133,19 +201,13 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) /* We cannot finish yet if more buffers snuck in while * re-enabling notify. */ - if (unlikely(vhost_enable_notify(&vsock->dev, vq))) { + if (!headcount && unlikely(vhost_enable_notify(&vsoc...
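The flattened hunk above is easier to follow re-flowed. Below is a minimal sketch of the control flow it implies, with comments added; get_rx_bufs() stands in for the helper the patch introduces (modeled on vhost_net's), and the surrounding tx loop and exact signatures are simplified, so treat this as illustration rather than the patch itself:

	headcount = get_rx_bufs(vq, vq->heads, vsock_hlen + pkt->len,
				&in, likely(mergeable) ? UIO_MAXIOV : 1);
	if (headcount <= 0) {
		/* No rx buffers (0) or a malformed ring (< 0): put the
		 * packet back on the send list and leave the tx loop.
		 */
		spin_lock_bh(&vsock->send_pkt_list_lock);
		list_add(&pkt->list, &vsock->send_pkt_list);
		spin_unlock_bh(&vsock->send_pkt_list_lock);

		/* Only when we genuinely ran out of buffers (headcount == 0)
		 * do we re-enable guest notifications; if more buffers snuck
		 * in while doing so, disable them again and retry.
		 */
		if (!headcount &&
		    unlikely(vhost_enable_notify(&vsock->dev, vq))) {
			vhost_disable_notify(&vsock->dev, vq);
			continue;
		}
		break;
	}

With mergeable rx buffers negotiated, the host may consume up to UIO_MAXIOV descriptor heads for a single packet; without them it is limited to one head, which is what the final quota argument expresses.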
2019 Jul 29
0
[PATCH 4/9] nouveau: factor out dmem fence completion
...++++++-------------- 1 file changed, 15 insertions(+), 18 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c index d469bc334438..21052a4aaf69 100644 --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c @@ -133,6 +133,19 @@ static void nouveau_dmem_page_free(struct page *page) spin_unlock(&chunk->lock); } +static void nouveau_dmem_fence_done(struct nouveau_fence **fence) +{ + if (fence) { + nouveau_fence_wait(*fence, true, false); + nouveau_fence_unref(fence); + } else { + /* + * FIXME wa...
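The point of the refactor is that several migrate/fault paths in nouveau_dmem.c open-coded the same "wait for the copy fence, then drop it" sequence; the nouveau_dmem_fence_done() helper shown (truncated) above centralizes it. A hypothetical before/after at a call site, purely for illustration: nouveau_fence_wait() and nouveau_fence_unref() are the real nouveau helpers, but the surrounding lines are not taken from the patch.

	/* before: each caller open-codes fence completion */
	if (fence) {
		nouveau_fence_wait(fence, true, false);
		nouveau_fence_unref(&fence);
	}

	/* after: callers collapse to a single call */
	nouveau_dmem_fence_done(&fence);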
2018 Nov 06
0
[PATCH 2/5] VSOCK: support fill data to mergeable rx buffer in host
...pkt->len, > + &in, likely(mergeable) ? UIO_MAXIOV : 1); > + if (headcount <= 0) { > spin_lock_bh(&vsock->send_pkt_list_lock); > list_add(&pkt->list, &vsock->send_pkt_list); > spin_unlock_bh(&vsock->send_pkt_list_lock); > @@ -133,19 +201,13 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) > /* We cannot finish yet if more buffers snuck in while > * re-enabling notify. > */ > - if (unlikely(vhost_enable_notify(&vsock->dev, vq))) { > + if (!headcount && unlikely(v...
2007 Apr 09
10
[Bug 1306] Spurious : "chan_read_failed for istate 3" errors from sshd
http://bugzilla.mindrot.org/show_bug.cgi?id=1306 Summary: Spurious : "chan_read_failed for istate 3" errors from sshd Product: Portable OpenSSH Version: 4.6p1 Platform: Other OS/Version: All Status: NEW Severity: normal Priority: P2 Component: sshd AssignedTo:
2008 Sep 24
2
[PATCH] disable nonblocking mode on serial port
...eal with the inevitable -EAGAIN that you'd receive on writes. Maybe the delays introduced by ser_send_pace are why it might work for some people? -jim --- nut-2.2.2/drivers/serial.c 2007-09-09 15:33:15.000000000 -0400 +++ nut-2.2.2-jim/drivers/serial.c 2008-09-24 16:55:32.000000000 -0400 @@ -133,12 +133,19 @@ int ser_open(const char *port) { int fd; + int flags; fd = open(port, O_RDWR | O_NOCTTY | O_EXCL | O_NONBLOCK); if (fd < 0) ser_open_error(port); + if ((flags = fcntl(fd, F_GETFL, 0)) < 0) + ser_open_error(port); + + if (fcntl(fd, F_SETFL, flags & ~O_NONBLO...
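The idea in the patch is the classic two-step: open the port with O_NONBLOCK so open() cannot hang waiting for modem-control lines, then clear the flag again so ordinary reads and writes block instead of failing with EAGAIN. A self-contained sketch of that pattern, with NUT's ser_open_error() replaced by plain perror()/exit() for illustration:

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Open a serial port without hanging in open(), then restore
	 * blocking semantics for all later I/O.
	 */
	static int open_serial_blocking(const char *port)
	{
		int fd, flags;

		/* O_NONBLOCK keeps open() from blocking on carrier detect */
		fd = open(port, O_RDWR | O_NOCTTY | O_EXCL | O_NONBLOCK);
		if (fd < 0) {
			perror(port);
			exit(EXIT_FAILURE);
		}

		/* drop O_NONBLOCK so reads/writes block normally again */
		flags = fcntl(fd, F_GETFL, 0);
		if (flags < 0 || fcntl(fd, F_SETFL, flags & ~O_NONBLOCK) < 0) {
			perror(port);
			exit(EXIT_FAILURE);
		}

		return fd;
	}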
2018 Nov 06
2
[PATCH 2/5] VSOCK: support fill data to mergeable rx buffer in host
...IO_MAXIOV : 1); >> + if (headcount <= 0) { >> spin_lock_bh(&vsock->send_pkt_list_lock); >> list_add(&pkt->list, &vsock->send_pkt_list); >> spin_unlock_bh(&vsock->send_pkt_list_lock); >> @@ -133,19 +201,13 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) >> /* We cannot finish yet if more buffers snuck in while >> * re-enabling notify. >> */ >> - if (unlikely(vhost_enable_notify(&vsock->dev, v...
2007 Mar 23
7
4.6p1 chan_read_failed error
The 4.6p1 sshd is logging this error during remote commands or file transfers: error: channel 0: chan_read_failed for istate 3 Platform is Solaris 8, 4.6p1 + OpenSSL 0.9.8d. The commands and transfers work correctly, so the error message appears to be spurious. The error message does not appear when processing logins. Otherwise 4.6p1 is running without any apparent problems. This error
2007 Jun 27
0
Branch 'as' - 6 commits - libswfdec/swfdec_as_interpret.c libswfdec/swfdec_movie_asprops.c libswfdec/swfdec_movie.c libswfdec/swfdec_movie.h libswfdec/swfdec_sprite.c libswfdec/swfdec_sprite.h libswfdec/swfdec_sprite_movie_as.c
...action_previous_frame (SwfdecAsCo static void swfdec_action_goto_frame (SwfdecAsContext *cx, guint action, const guint8 *data, guint len) { - SwfdecMovie *movie = swfdec_action_get_target (cx); + SwfdecSpriteMovie *movie = swfdec_action_get_target (cx); guint frame; if (len != 2) { @@ -133,8 +133,8 @@ swfdec_action_goto_frame (SwfdecAsContex } frame = GUINT16_FROM_LE (*((guint16 *) data)); if (movie) { - swfdec_movie_goto (movie, frame); - movie->stopped = TRUE; + swfdec_sprite_movie_goto (movie, frame + 1); + movie->playing = FALSE; } else { SWFDEC...
2010 Apr 23
2
[PATCH] Config: Change config to lookup dependencies by name
Conversion would fail if it was necessary to install a package, and multiple architectures of that package were already installed. This was happening specifically with device-mapper on RHEL 5 conversions. Unfortunately the flat dependency list in the config file didn't really allow this to be fixed. The best that could be done is to specify both i386 and x86_64 dependencies, but would mean
2019 Aug 08
10
turn hmm migrate_vma upside down v2
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which starts revamping the migrate_vma functionality. The prime idea is to export three slightly lower level functions and thus avoid the need for migrate_vma_ops callbacks. Diffstat: 5 files changed, 281 insertions(+), 607 deletions(-) A git tree is also available at: git://git.infradead.org/users/hch/misc.git
2019 Jul 29
24
turn the hmm migrate_vma upside down
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which starts revamping the migrate_vma functionality. The prime idea is to export three slightly lower level functions and thus avoid the need for migrate_vma_ops callbacks. Diffstat: 4 files changed, 285 insertions(+), 602 deletions(-) A git tree is also available at: git://git.infradead.org/users/hch/misc.git
2019 Aug 14
20
turn hmm migrate_vma upside down v3
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which starts revamping the migrate_vma functionality. The prime idea is to export three slightly lower level functions and thus avoid the need for migrate_vma_ops callbacks. Diffstat: 7 files changed, 282 insertions(+), 614 deletions(-) A git tree is also available at: git://git.infradead.org/users/hch/misc.git
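The cover letter does not name the three exported functions; assuming they are the migrate_vma_setup()/migrate_vma_pages()/migrate_vma_finalize() trio that eventually landed upstream from this work, the callback-free driver flow would look roughly like the sketch below. This is illustrative only: the function and field names come from the upstream API rather than this cover letter, the copy step is stubbed out, and demo_migrate_range() is a made-up caller.

	/* Hypothetical driver-side use of the lower-level migrate helpers.
	 * src/dst must each hold (end - start) / PAGE_SIZE entries.
	 */
	static int demo_migrate_range(struct vm_area_struct *vma,
				      unsigned long start, unsigned long end,
				      unsigned long *src, unsigned long *dst)
	{
		struct migrate_vma args = {
			.vma	= vma,
			.start	= start,
			.end	= end,
			.src	= src,
			.dst	= dst,
		};
		int ret;

		ret = migrate_vma_setup(&args);	/* collect and isolate pages */
		if (ret)
			return ret;

		if (args.cpages) {
			/* driver specific: allocate destination pages, copy
			 * the data and fill args.dst[] (omitted here)
			 */
			migrate_vma_pages(&args);	/* remap to the new pages */
		}

		migrate_vma_finalize(&args);	/* unlock and release pages */
		return 0;
	}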
2013 Mar 21
24
[PATCH 00/22] New virtio PCI layout
I've renewed this again, with some comments from HPA. I've tried to keep the new patches separate, so you can see the changes since we last discussed this (and so it's easy to back it out if we decide it's insane). I haven't even looked at the QEMU side so this is completely untested. Comments gratefully received! Rusty. Michael S Tsirkin (1): pci: add pci_iomap_range
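The one patch attributed to Michael S Tsirkin in the shortlog, "pci: add pci_iomap_range", adds a helper that lets a driver map just a window of a BAR rather than the whole region. A hedged fragment showing the intended usage; the BAR number, offset and length here are made up, not taken from the series:

	void __iomem *regs;

	/* map 0x100 bytes at offset 0x2000 of BAR 4 instead of
	 * mapping the entire BAR with pci_iomap()
	 */
	regs = pci_iomap_range(pdev, 4, 0x2000, 0x100);
	if (!regs)
		return -ENOMEM;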