similar to: [PATCH net 0/2] vsock/virtio: fix null-pointer dereference and related precautions

Displaying results from an estimated 400 matches similar to: "[PATCH net 0/2] vsock/virtio: fix null-pointer dereference and related precautions"

2019 Mar 05
4
[PATCH] vsock/virtio: fix kernel panic from virtio_transport_reset_no_sock
Prior to commit 22b5c0b63f32 ("vsock/virtio: fix kernel panic after device hot-unplug"), vsock_core_init() was called from virtio_vsock_probe(). Now, virtio_transport_reset_no_sock() can be called before vsock_core_init() has had a chance to run. [Wed Feb 27 14:17:09 2019] BUG: unable to handle kernel NULL pointer dereference at 0000000000000110 [Wed Feb 27 14:17:09 2019] #PF error:
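The fix described here amounts to refusing to send the reset while no transport is registered, instead of dereferencing NULL ops. Below is a self-contained userspace model of that guard; every name in it (registered_ops, core_init, reset_no_sock) is hypothetical and only mirrors the shape of the kernel code, not its actual API.

#include <errno.h>
#include <stdio.h>

/* The transport's send callback, populated only once the core has
 * initialized -- the analogue of the ops vsock_core_init() registers. */
struct transport_ops {
	int (*send_pkt)(const char *pkt);
};

static const struct transport_ops *registered_ops; /* NULL until init */

static int demo_send_pkt(const char *pkt)
{
	printf("sent: %s\n", pkt);
	return 0;
}

static const struct transport_ops demo_ops = { .send_pkt = demo_send_pkt };

/* Analogue of vsock_core_init(): makes the transport usable. */
static void core_init(void)
{
	registered_ops = &demo_ops;
}

/* Analogue of virtio_transport_reset_no_sock(): before the fix this
 * would call registered_ops->send_pkt unconditionally and crash when
 * invoked ahead of core_init(); the NULL check is the fix. */
static int reset_no_sock(const char *pkt)
{
	if (!registered_ops)
		return -ENOTCONN;
	return registered_ops->send_pkt(pkt);
}

int main(void)
{
	if (reset_no_sock("RST") == -ENOTCONN) /* arrives before init */
		puts("dropped reset: transport not ready");
	core_init();
	return reset_no_sock("RST"); /* now goes out normally */
}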
2019 Mar 06
2
[PATCH v2] vsock/virtio: fix kernel panic from virtio_transport_reset_no_sock
Prior to commit 22b5c0b63f32 ("vsock/virtio: fix kernel panic after device hot-unplug"), vsock_core_init() was called from virtio_vsock_probe(). Now, virtio_transport_reset_no_sock() can be called before vsock_core_init() has had a chance to run. [Wed Feb 27 14:17:09 2019] BUG: unable to handle kernel NULL pointer dereference at 0000000000000110 [Wed Feb 27 14:17:09 2019] #PF error:
2016 Jul 28
6
[RFC v6 0/6] Add virtio transport for AF_VSOCK
This series is based on v4.7. This RFC is the implementation of the new VIRTIO Socket device. It is being developed in parallel with the VIRTIO device specification and proves out the design. Once the specification has been accepted, I will send a non-RFC version of this patch series. v6: * Add VHOST_VSOCK_SET_RUNNING ioctl to start/stop vhost cleanly * Add graceful shutdown to avoid port reuse while
2019 Nov 14
15
[PATCH net-next v2 00/15] vsock: add multi-transports support
Most of the patches have been reviewed by Dexuan, Stefan, and Jorgen. The following patches still need review: - [11/15] vsock: add multi-transports support - [12/15] vsock/vmci: register vmci_transport only when VMCI guest/host are active - [15/15] vhost/vsock: refuse CID assigned to the guest->host transport RFC: https://patchwork.ozlabs.org/cover/1168442/ v1:
2019 Sep 27
29
[RFC PATCH 00/13] vsock: add multi-transports support
Hi all, this series adds multi-transport support to vsock, following this proposal: https://www.spinics.net/lists/netdev/msg575792.html With multi-transport support, we can use vsock with nested VMs (even with different hypervisors), loading both guest->host and host->guest transports at the same time. Before this series, vmci-transport supported this behavior but only using
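To make the "both transports at once" idea concrete, here is a tiny sketch of how the remote CID could pick between two loaded transports. VMADDR_CID_HOST is the real AF_VSOCK constant (2); assign_transport() and the struct layout are hypothetical simplifications of what the series implements.

#include <stdio.h>

#define VMADDR_CID_HOST 2U

struct vsock_transport { const char *name; };

static struct vsock_transport g2h = { "guest->host (e.g. virtio)" };
static struct vsock_transport h2g = { "host->guest (e.g. vhost)" };

/* CIDs up to VMADDR_CID_HOST address the hypervisor or the host, so
 * they go through the guest->host transport; higher CIDs address
 * guests and go through the host->guest transport. */
static struct vsock_transport *assign_transport(unsigned int remote_cid)
{
	return remote_cid <= VMADDR_CID_HOST ? &g2h : &h2g;
}

int main(void)
{
	printf("CID 2  -> %s\n", assign_transport(2)->name);  /* to the host */
	printf("CID 42 -> %s\n", assign_transport(42)->name); /* to a guest */
	return 0;
}

This is what lets a nested VM act as both a guest (toward its host) and a host (toward its own guests) at the same time.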
2019 Nov 28
5
[RFC PATCH 0/3] vsock: support network namespace
Hi, now that we have multi-transport support upstream, I have started looking at supporting network namespaces (netns) in vsock. As we partially discussed in the multi-transport proposal [1], it would be nice to support network namespaces in vsock to reach the following goals: - isolate host applications from guest applications using the same ports with CID_ANY - assign the same CID to VMs running in
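The first goal can be modeled in a few lines: a bound socket remembers the namespace it was created in, and lookups only match within that namespace, so two namespaces can bind the same port without clashing. All names below are hypothetical; the kernel analogue would be a net_eq(sock_net(sk), net) check in the socket lookup path.

#include <stdbool.h>
#include <stdio.h>

struct netns { int id; };
struct bound_sock { const struct netns *net; unsigned int port; };

/* Only match a socket bound in the same namespace (net_eq() analogue). */
static bool lookup_match(const struct bound_sock *sk,
			 const struct netns *net, unsigned int port)
{
	return sk->net == net && sk->port == port;
}

int main(void)
{
	struct netns host_ns = { 0 }, vm_ns = { 1 };
	struct bound_sock host_app = { &host_ns, 1024 }; /* CID_ANY:1024 */

	/* Same port, different namespace: no clash. */
	printf("visible from vm_ns:   %s\n",
	       lookup_match(&host_app, &vm_ns, 1024) ? "yes" : "no");
	printf("visible from host_ns: %s\n",
	       lookup_match(&host_app, &host_ns, 1024) ? "yes" : "no");
	return 0;
}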
2019 Oct 23
33
[PATCH net-next 00/14] vsock: add multi-transports support
This series adds multi-transport support to vsock, following this proposal: https://www.spinics.net/lists/netdev/msg575792.html With multi-transport support, we can use VSOCK with nested VMs (even with different hypervisors), loading both guest->host and host->guest transports at the same time. Before this series, vmci-transport supported this behavior but only using VMware
2019 Mar 06
0
[PATCH] vsock/virtio: fix kernel panic from virtio_transport_reset_no_sock
Hi Adalbert, thanks for catching this issue; I have a comment below. On Tue, Mar 05, 2019 at 08:01:45PM +0200, Adalbert Lazăr wrote: > Prior to commit 22b5c0b63f32 ("vsock/virtio: fix kernel panic after device hot-unplug"), > vsock_core_init() was called from virtio_vsock_probe(). Now, > virtio_transport_reset_no_sock() can be called before vsock_core_init() > has the
2016 Dec 07
7
[PATCH 0/4] vsock: cancel connect packets when failing to connect
Currently, if a connect call fails on a signal or timeout (e.g., the guest is still in the process of starting up), we simply return to the caller and leave the connect packets queued; they are sent even though the connection is considered a failure, which can confuse applications with an unwanted, spurious connect attempt. The patchset enables vsock (both host and guest) to cancel queued packets when a
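A minimal model of the idea (all names hypothetical): connect requests sit in a transmit queue, and the failure path removes its own entries before returning, so no stale request is ever sent.

#include <stdio.h>

#define QUEUE_MAX 16

struct pkt { int src_port; int used; };

static struct pkt tx_queue[QUEUE_MAX];

static void queue_pkt(int src_port)
{
	for (int i = 0; i < QUEUE_MAX; i++) {
		if (!tx_queue[i].used) {
			tx_queue[i] = (struct pkt){ .src_port = src_port, .used = 1 };
			return;
		}
	}
}

/* Analogue of the series' cancel callback: drop every queued packet
 * that belongs to the failing connection. */
static void cancel_pkt(int src_port)
{
	for (int i = 0; i < QUEUE_MAX; i++)
		if (tx_queue[i].used && tx_queue[i].src_port == src_port)
			tx_queue[i].used = 0;
}

int main(void)
{
	int leftovers = 0;

	queue_pkt(1234);  /* connect request queued for transmission */
	cancel_pkt(1234); /* connect failed on a signal/timeout: drop it */

	for (int i = 0; i < QUEUE_MAX; i++)
		leftovers += tx_queue[i].used;
	printf("packets still queued: %d\n", leftovers); /* 0 */
	return 0;
}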
2016 Dec 07
8
[PATCH v2 0/4] vsock: cancel connect packets when failing to connect
Currently, if a connect call fails on a signal or timeout (e.g., the guest is still in the process of starting up), we simply return to the caller and leave the connect packets queued; they are sent even though the connection is considered a failure, which can confuse applications with an unwanted, spurious connect attempt. The patchset enables vsock (both host and guest) to cancel queued packets when a
2016 Dec 06
26
[PATCH 00/10] virtio: sparse fixes
I ran the latest sparse from git on the virtio drivers (it turns out the version I had was rather outdated). This patchset fixes a couple of bugs this uncovered, and adds some annotations to make the code sparse-clean. In particular, endianness is often tricky, so this patchset enables endianness checks for sparse builds. Michael S. Tsirkin (10): virtio_console: drop unused config fields drm/virtio: fix
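For context on what the endianness checks catch: sparse's __bitwise attribute turns a typedef into a distinct "restricted" type, so mixing it with plain integers without an explicit conversion triggers a warning. A minimal standalone illustration follows; the macro spellings mirror the kernel's (__le16, __force in linux/types.h), but this file is not kernel code.

#include <stdint.h>

/* Plain compilers see empty macros; sparse (which defines __CHECKER__)
 * enforces the restricted-type rules. */
#ifdef __CHECKER__
#define __bitwise __attribute__((bitwise))
#define __force   __attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef uint16_t __bitwise le16;

/* Real code would byte-swap on big-endian hosts; identity is enough
 * to demonstrate the type discipline. */
static inline le16 cpu_to_le16_demo(uint16_t v)
{
	return (__force le16)v;
}

int main(void)
{
	le16 wire = cpu_to_le16_demo(0x1234);

	/* uint16_t bad = wire;  <- sparse would warn: restricted le16 */
	uint16_t ok = (__force uint16_t)wire;

	return ok == 0x1234 ? 0 : 1;
}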