Messages similar to: "[PATCH] vhost_net: revert upend_idx only on retriable error"

2023 May 11
0
[PATCH v2] vhost_net: revert upend_idx only on retriable error
On Tue, Apr 25, 2023 at 4:44 AM Andrey Smetanin <asmetanin at yandex-team.ru> wrote: > > Fix a possible leak of virtqueue used buffers, and the resulting stall, > in case of a temporary -EIO from sendmsg(), which the > tun driver produces while the backend device is not up. > > In the case of a non-retriable error with zcopy, do not revert upend_idx; > pass the packet data on (that is, update
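For illustration, a minimal user-space sketch of the decision the fix describes, not the kernel code itself: only a transient sendmsg() failure should undo the zerocopy slot reservation, while a hard error such as -EIO completes the slot so the used ring keeps advancing. The helper names and the sentinel value are made up for the sketch.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define UIO_MAXIOV   1024
    #define DMA_DONE_LEN 2u                 /* illustrative "completed" marker */

    static unsigned upend_idx;              /* next zerocopy slot to reserve */
    static unsigned head_len[UIO_MAXIOV];   /* per-slot completion state */

    /* Transient failures are worth retrying; a temporary -EIO from tun
     * (backend not up yet) is not, and treating it as retriable is what
     * leaked used buffers and stalled the queue. */
    static bool retriable(int err)
    {
        return err == -EAGAIN || err == -ENOMEM || err == -ENOBUFS;
    }

    static void on_send_error(int err, unsigned desc)
    {
        if (retriable(err))
            upend_idx = (upend_idx - 1) % UIO_MAXIOV;  /* undo reservation, resend */
        else
            head_len[desc] = DMA_DONE_LEN;  /* drop packet but complete the slot */
    }

    int main(void)
    {
        upend_idx = 1;
        on_send_error(-EIO, 0);     /* non-retriable: index kept, slot completed */
        printf("upend_idx=%u head_len[0]=%u\n", upend_idx, head_len[0]);
        on_send_error(-EAGAIN, 0);  /* retriable: reservation reverted */
        printf("upend_idx=%u\n", upend_idx);
        return 0;
    }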
2015 Jul 02
1
[RFC PATCH 1/1] mshyperv: fix recognition of Hyper-V guest crash MSR's
From: Andrey Smetanin <asmetanin at virtuozzo.com> The Hypervisor Top Level Functional Specification v3.1/4.0 notes that bit 10 of EDX from cpuid leaf 0x40000003 should be used to check that the Hyper-V guest crash MSR functionality is available. Currently the code checks the EAX register instead of EDX; this patch fixes that. Signed-off-by: Andrey Smetanin <asmetanin at
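The fix amounts to testing the right output register. A hedged user-space sketch of the check (leaf and bit number as quoted from the TLFS above; the program only makes sense inside a Hyper-V guest, and the macro names are illustrative):

    #include <stdio.h>
    #include <cpuid.h>  /* GCC/Clang __cpuid() macro */

    #define HYPERV_CPUID_FEATURES      0x40000003
    #define HV_CRASH_MSR_AVAILABLE_BIT (1u << 10)   /* EDX bit 10 */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        __cpuid(HYPERV_CPUID_FEATURES, eax, ebx, ecx, edx);

        /* The bug was testing EAX here; the TLFS defines the flag in EDX. */
        if (edx & HV_CRASH_MSR_AVAILABLE_BIT)
            puts("Hyper-V guest crash MSRs available");
        else
            puts("Hyper-V guest crash MSRs not advertised");
        return 0;
    }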
2015 Oct 09
2
[PATCH 2/2] kvm/x86: Hyper-V kvm exit
On 09/10/2015 15:39, Denis V. Lunev wrote: > From: Andrey Smetanin <asmetanin at virtuozzo.com> > > A new vcpu exit is introduced to notify userspace of > changes in the Hyper-V SynIC configuration triggered by the guest writing to the > corresponding MSRs. > > Signed-off-by: Andrey Smetanin <asmetanin at virtuozzo.com> > Reviewed-by: Roman Kagan <rkagan at
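On the userspace side, the new exit is consumed from the vcpu run loop. A sketch against the UAPI as it eventually landed (KVM_EXIT_HYPERV with type KVM_EXIT_HYPERV_SYNIC in <linux/kvm.h>); handle_synic_msr() is a hypothetical VMM callback, and the surrounding KVM_RUN loop is omitted:

    #include <linux/kvm.h>
    #include <stdio.h>

    /* Hypothetical VMM callback: mirror the SynIC state the guest just set. */
    static void handle_synic_msr(__u32 msr, __u64 control,
                                 __u64 evt_page, __u64 msg_page)
    {
        printf("SynIC update: msr=%#x control=%#llx evt=%#llx msg=%#llx\n",
               msr, (unsigned long long)control,
               (unsigned long long)evt_page, (unsigned long long)msg_page);
    }

    /* Called after ioctl(vcpu_fd, KVM_RUN, 0) returns. */
    static void handle_exit(struct kvm_run *run)
    {
        switch (run->exit_reason) {
        case KVM_EXIT_HYPERV:
            if (run->hyperv.type == KVM_EXIT_HYPERV_SYNIC)
                handle_synic_msr(run->hyperv.u.synic.msr,
                                 run->hyperv.u.synic.control,
                                 run->hyperv.u.synic.evt_page,
                                 run->hyperv.u.synic.msg_page);
            break;
        default:
            break;
        }
    }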
2015 Oct 26
0
[Qemu-devel] [PATCH 3/7] linux-headers/kvm: add Hyper-V SynIC irq routing type and struct
On 26 October 2015 at 09:50, Andrey Smetanin <asmetanin at virtuozzo.com> wrote: > Signed-off-by: Andrey Smetanin <asmetanin at virtuozzo.com> > Reviewed-by: Roman Kagan <rkagan at virtuozzo.com> > Signed-off-by: Denis V. Lunev <den at openvz.org> > CC: Vitaly Kuznetsov <vkuznets at redhat.com> > CC: "K. Y. Srinivasan" <kys at
2015 Nov 02
1
[kvm-unit-tests PATCH] x86: hyperv_synic: Hyper-V SynIC test
On 11/02/2015 03:16 PM, Paolo Bonzini wrote:
> On 26/10/2015 10:56, Andrey Smetanin wrote:
>> Hyper-V SynIC is a Hyper-V synthetic interrupt controller.
>>
>> The test runs on every vCPU and performs the following steps:
>> * read from all Hyper-V SynIC MSRs
>> * set up the Hyper-V SynIC evt/msg pages
>> * set up SINT routing
>> * inject SINTs
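In kvm-unit-tests terms, the setup steps above boil down to a handful of MSR writes per vCPU. A sketch assuming the standard Hyper-V MSR numbers from the TLFS and the suite's wrmsr() helper; the page addresses, vector base, and function name are illustrative:

    #include "libcflat.h"
    #include "processor.h"  /* kvm-unit-tests wrmsr() */

    #define HV_X64_MSR_SCONTROL 0x40000080
    #define HV_X64_MSR_SIEFP    0x40000082
    #define HV_X64_MSR_SIMP     0x40000083
    #define HV_X64_MSR_SINT0    0x40000090
    #define HV_SYNIC_ENABLE     (1ULL << 0)

    static void synic_setup_this_vcpu(u64 evt_page_gpa, u64 msg_page_gpa)
    {
        int i;

        /* Point SynIC at page-aligned evt/msg pages; bit 0 enables each. */
        wrmsr(HV_X64_MSR_SIEFP, evt_page_gpa | 1);
        wrmsr(HV_X64_MSR_SIMP, msg_page_gpa | 1);
        wrmsr(HV_X64_MSR_SCONTROL, HV_SYNIC_ENABLE);

        /* Program all 16 SINTs, one vector each; mask bit (16) left clear. */
        for (i = 0; i < 16; i++)
            wrmsr(HV_X64_MSR_SINT0 + i, 0x40 + i);
    }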
2015 Oct 26
2
[Qemu-devel] [PATCH 3/7] linux-headers/kvm: add Hyper-V SynIC irq routing type and struct
On 10/26/2015 01:03 PM, Peter Maydell wrote: > On 26 October 2015 at 09:50, Andrey Smetanin <asmetanin at virtuozzo.com> wrote: >> Signed-off-by: Andrey Smetanin <asmetanin at virtuozzo.com> >> Reviewed-by: Roman Kagan <rkagan at virtuozzo.com> >> Signed-off-by: Denis V. Lunev <den at openvz.org> >> CC: Vitaly Kuznetsov <vkuznets at
2015 Oct 09
0
[PATCH 2/2] kvm/x86: Hyper-V kvm exit
From: Andrey Smetanin <asmetanin at virtuozzo.com> A new vcpu exit is introduced to notify userspace of changes in the Hyper-V SynIC configuration triggered by the guest writing to the corresponding MSRs. Signed-off-by: Andrey Smetanin <asmetanin at virtuozzo.com> Reviewed-by: Roman Kagan <rkagan at virtuozzo.com> Signed-off-by: Denis V. Lunev <den at openvz.org> CC:
2015 Oct 16
0
[PATCH 9/9] kvm/x86: Hyper-V kvm exit
From: Andrey Smetanin <asmetanin at virtuozzo.com> A new vcpu exit is introduced to notify userspace of changes in the Hyper-V SynIC configuration triggered by the guest writing to the corresponding MSRs. Signed-off-by: Andrey Smetanin <asmetanin at virtuozzo.com> Reviewed-by: Roman Kagan <rkagan at virtuozzo.com> Signed-off-by: Denis V. Lunev <den at openvz.org> CC:
2015 Oct 26
1
[PATCH 3/7] linux-headers/kvm: add Hyper-V SynIC irq routing type and struct
Signed-off-by: Andrey Smetanin <asmetanin at virtuozzo.com> Reviewed-by: Roman Kagan <rkagan at virtuozzo.com> Signed-off-by: Denis V. Lunev <den at openvz.org> CC: Vitaly Kuznetsov <vkuznets at redhat.com> CC: "K. Y. Srinivasan" <kys at microsoft.com> CC: Gleb Natapov <gleb at kernel.org> CC: Paolo Bonzini <pbonzini at redhat.com> CC: Roman
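For reference, what the header addition boils down to: a new routing entry type whose payload ties a GSI to a (vcpu, sint) pair. A sketch of filling such an entry, assuming a <linux/kvm.h> new enough to carry KVM_IRQ_ROUTING_HV_SINT; fill_hv_sint_route() is a hypothetical helper:

    #include <linux/kvm.h>
    #include <string.h>

    /* Map gsi -> (vcpu, sint); the entry is later submitted together with
     * all other routes via ioctl(vm_fd, KVM_SET_GSI_ROUTING, ...). */
    static void fill_hv_sint_route(struct kvm_irq_routing_entry *e,
                                   __u32 gsi, __u32 vcpu, __u32 sint)
    {
        memset(e, 0, sizeof(*e));
        e->gsi = gsi;
        e->type = KVM_IRQ_ROUTING_HV_SINT;
        e->u.hv_sint.vcpu = vcpu;
        e->u.hv_sint.sint = sint;
    }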
2017 Sep 28
1
[PATCH net-next RFC 5/5] vhost_net: basic tx virtqueue batched processing
> @@ -461,6 +460,7 @@ static void handle_tx(struct vhost_net *net)
>  	struct socket *sock;
>  	struct vhost_net_ubuf_ref *uninitialized_var(ubufs);
>  	bool zcopy, zcopy_used;
> +	int i, batched = VHOST_NET_BATCH;
>
>  	mutex_lock(&vq->mutex);
>  	sock = vq->private_data;
> @@ -475,6 +475,12 @@ static void handle_tx(struct
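The hunk reserves room to process up to VHOST_NET_BATCH descriptors per pass instead of one at a time. A generic, kernel-free sketch of the accumulate-then-flush idea; the struct and function names are illustrative, not vhost's:

    #include <stdio.h>

    #define VHOST_NET_BATCH 64

    struct batch {
        int heads[VHOST_NET_BATCH]; /* descriptor heads collected so far */
        int n;
    };

    /* Publish the whole batch at once: one used-ring update and one
     * guest notification instead of one per packet. */
    static void flush(struct batch *b)
    {
        if (!b->n)
            return;
        printf("add %d heads to used ring, single signal\n", b->n);
        b->n = 0;
    }

    static void queue(struct batch *b, int head)
    {
        b->heads[b->n++] = head;
        if (b->n == VHOST_NET_BATCH)  /* batch full: flush now */
            flush(b);
    }

    int main(void)
    {
        struct batch b = { .n = 0 };

        for (int head = 0; head < 150; head++)
            queue(&b, head);
        flush(&b);  /* don't forget the partial tail */
        return 0;
    }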
2017 Sep 28
0
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On 2017/09/28 08:25, Willem de Bruijn wrote: > From: Willem de Bruijn <willemb at google.com> > > Vhost-net has a hard limit on the number of zerocopy skbs in flight. > When reached, transmission stalls. Stalls cause latency, as well as > head-of-line blocking of other flows that do not use zerocopy. > > Instead of stalling, revert to copy-based transmission. > >
2017 Sep 29
0
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Wed, Sep 27, 2017 at 08:25:56PM -0400, Willem de Bruijn wrote: > From: Willem de Bruijn <willemb at google.com> > > Vhost-net has a hard limit on the number of zerocopy skbs in flight. > When reached, transmission stalls. Stalls cause latency, as well as > head-of-line blocking of other flows that do not use zerocopy. > > Instead of stalling, revert to copy-based
2017 Oct 06
1
[PATCH net-next v2] vhost_net: do not stall on zerocopy depletion
From: Willem de Bruijn <willemb at google.com> Vhost-net has a hard limit on the number of zerocopy skbs in flight. When reached, transmission stalls. Stalls cause latency, as well as head-of-line blocking of other flows that do not use zerocopy. Instead of stalling, revert to copy-based transmission. Tested by sending two udp flows from guest to host, one with payload of
2017 Sep 28
9
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
From: Willem de Bruijn <willemb at google.com> Vhost-net has a hard limit on the number of zerocopy skbs in flight. When reached, transmission stalls. Stalls cause latency, as well as head-of-line blocking of other flows that do not use zerocopy. Instead of stalling, revert to copy-based transmission. Tested by sending two udp flows from guest to host, one with payload of
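The mechanism behind the patch: once the number of zerocopy packets in flight hits the cap, the packet is sent through the ordinary copy path rather than waiting for completions. A standalone sketch of that decision, loosely modeled on vhost_net's upend/done counters; the cap, the length cutoff, and the names are illustrative:

    #include <stdbool.h>
    #include <stdio.h>

    #define UIO_MAXIOV     1024
    #define VHOST_MAX_PEND (UIO_MAXIOV / 4)  /* illustrative in-flight cap */

    struct nvq_state {
        unsigned upend_idx; /* zerocopy slots reserved so far */
        unsigned done_idx;  /* zerocopy slots completed so far */
    };

    static bool exceeds_maxpend(const struct nvq_state *nvq)
    {
        return (nvq->upend_idx - nvq->done_idx) % UIO_MAXIOV > VHOST_MAX_PEND;
    }

    /* The patch's point: depletion no longer stalls transmission; the
     * packet simply falls back to the copy path. */
    static bool use_zerocopy(const struct nvq_state *nvq, unsigned len)
    {
        return len >= 256 /* illustrative copy/zerocopy cutoff */
               && !exceeds_maxpend(nvq);
    }

    int main(void)
    {
        struct nvq_state nvq = { .upend_idx = 300, .done_idx = 0 };

        printf("zerocopy? %s\n",
               use_zerocopy(&nvq, 4096) ? "yes" : "no, copy instead");
        return 0;
    }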