Displaying 19 results from an estimated 19 matches for "if_instance".
2011 Jan 26
8
[PATCH 1/8] staging: hv: Convert camelCase variables in connection.c to lowercase
Signed-off-by: Haiyang Zhang <haiyangz at microsoft.com>
Signed-off-by: Hank Janssen <hjanssen at microsoft.com>
---
drivers/staging/hv/channel.c | 48 ++++++------
drivers/staging/hv/channel_mgmt.c | 48 ++++++------
drivers/staging/hv/connection.c | 154 ++++++++++++++++++------------------
drivers/staging/hv/vmbus_drv.c | 2 +-
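As an aside on what this kind of conversion looks like, here is an invented two-line illustration (the struct and field names are made up, not taken from the actual patch): the Windows-derived camelCase identifiers become the kernel's lower_case_with_underscores style.

/* Before (invented example): */
struct example_connection_before {
	void *MonitorPages;	/* camelCase, Windows-derived naming */
};

/* After the conversion: */
struct example_connection_after {
	void *monitor_pages;	/* kernel-style lower case with underscores */
};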
2011 Aug 25
56
[PATCH 0000/0059] Staging: hv: Driver cleanup
Further cleanup of the hv drivers.
1) Implement code for autoloading the vmbus drivers without using PCI or DMI
signatures. I have implemented this based on Greg's feedback on my earlier
implementation (a sketch of the device-table mechanism follows this message).
2) Clean up error handling across the board and use standard Linux error codes.
3) General cleanup
Regards,
K. Y
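As a rough sketch of the autoloading mechanism mentioned in point 1 (illustrative only; it assumes the struct hv_vmbus_device_id layout from current include/linux/mod_devicetable.h, and the GUID value is invented): a vmbus driver publishes a device-ID table keyed on the device class GUID, and MODULE_DEVICE_TABLE() exports the modalias data that lets udev/modprobe autoload the module when the device appears, with no PCI or DMI signature involved.

#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/uuid.h>

/* Invented GUID; a real driver uses the GUID of the device class it drives. */
static const struct hv_vmbus_device_id example_id_table[] = {
	{ .guid = GUID_INIT(0x12345678, 0x9abc, 0xdef0, 0x12, 0x34,
			    0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0) },
	{ },	/* terminating entry */
};

/* Exports the modalias table used to autoload the module on device discovery. */
MODULE_DEVICE_TABLE(vmbus, example_id_table);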
2019 Oct 23
0
[PATCH net-next 11/14] vsock: add multi-transports support
...on(struct vmbus_channel *chan)
new->sk_state = TCP_ESTABLISHED;
sk->sk_ack_backlog++;
- hvs_addr_init(&vnew->local_addr, if_type);
- hvs_remote_addr_init(&vnew->remote_addr, &vnew->local_addr);
-
hvs_new->vm_srv_id = *if_type;
hvs_new->host_srv_id = *if_instance;
@@ -880,6 +891,11 @@ static struct vsock_transport hvs_transport = {
};
+static bool hvs_check_transport(struct vsock_sock *vsk)
+{
+ return vsk->transport == &hvs_transport;
+}
+
static int hvs_probe(struct hv_device *hdev,
const struct hv_vmbus_device_id *dev_id)
{
@@ -92...
2019 Sep 27
0
[RFC PATCH 10/13] vsock: add multi-transports support
...on(struct vmbus_channel *chan)
new->sk_state = TCP_ESTABLISHED;
sk->sk_ack_backlog++;
- hvs_addr_init(&vnew->local_addr, if_type);
- hvs_remote_addr_init(&vnew->remote_addr, &vnew->local_addr);
-
hvs_new->vm_srv_id = *if_type;
hvs_new->host_srv_id = *if_instance;
@@ -845,6 +856,8 @@ int hvs_notify_send_post_enqueue(struct vsock_sock *vsk, ssize_t written,
}
static struct vsock_transport hvs_transport = {
+ .features = VSOCK_TRANSPORT_F_G2H,
+
.get_local_cid = hvs_get_local_cid,
.init = hvs_sock_ini...
2011 Jul 15
122
[PATCH 0000/0117] Staging: hv: Driver cleanup
Further cleanup of the hv drivers. Back in June I had sent two patch
sets to address these issues. I have addressed the comments I got from
the community on my earlier patches here:
1) Implement code for autoloading the vmbus drivers without using PCI or DMI
signatures. I have implemented this based on Greg's feedback on my earlier
implementation.
2) Clean up error handling across
2019 Nov 28
5
[RFC PATCH 0/3] vsock: support network namespace
Hi,
now that we have multi-transport support upstream, I started looking at
supporting network namespaces (netns) in vsock.
As we partially discussed in the multi-transport proposal [1], it would
be nice to support network namespaces in vsock to reach the following
goals:
- isolate host applications from guest applications using the same ports
with CID_ANY
- assign the same CID of VMs running in
2011 Mar 29
9
[PATCH 00/07] Remove and replace all un-needed DPRINT and printk
This patch set removes all unneeded DPRINT and printk calls and replaces
the remaining ones with the correct pr_, dev_ and netdev_ calls
in hv_vmbus, hv_netvsc, hv_timesource and hv_utils.
Several DPRINTs remain and will be cleaned up in my next
set of patches. They deal with printing certain debugging output that will be
implemented slightly differently.
The remaining hv_storvsc and
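To make the conversion described in this series concrete, here is an invented snippet (the wrapper function and the DPRINT_ERR line in the comment are placeholders, not code from the actual patches) showing the driver-private debug macros giving way to the standard kernel logging helpers, which prefix messages with the subsystem, device or netdev name:

#include <linux/device.h>
#include <linux/netdevice.h>
#include <linux/printk.h>

static void example_logging(struct device *dev, struct net_device *ndev)
{
	/* DPRINT_ERR(VMBUS, "unable to connect"); becomes: */
	pr_err("hv_vmbus: unable to connect\n");

	/* Messages tied to a struct device gain its name automatically. */
	dev_warn(dev, "ring buffer not ready\n");

	/* Network drivers use the netdev_* variants for the same reason. */
	netdev_info(ndev, "link up\n");
}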
2011 Sep 08
25
[PATCH 0000/0025] Staging: hv: Driver cleanup
Address Greg's VmBus audit comments:
1) Leverage the driver_data field in struct hv_vmbus_device_id to
simplify driver code (see the sketch after this message).
2) Make the util driver conform to the Linux Driver Model.
3) Get rid of the ext field in struct hv_device by using the
driver-specific data functionality.
4) Other general cleanup.
Regards,
K. Y
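A rough sketch of points 1 and 3 (all names and the GUID value below are invented; the struct layout follows include/linux/mod_devicetable.h and the drvdata helpers are the ones declared in current include/linux/hyperv.h): the ID table carries a per-device constant in driver_data that probe() reads back instead of matching GUIDs by hand, and per-device driver state is attached with hv_set_drvdata() rather than a dedicated ext field.

#include <linux/hyperv.h>
#include <linux/mod_devicetable.h>
#include <linux/slab.h>
#include <linux/uuid.h>

enum example_service { EXAMPLE_SHUTDOWN = 1 };

struct example_state {
	int service;
};

static const struct hv_vmbus_device_id example_ids[] = {
	/* Invented GUID; driver_data says which service this device is. */
	{ .guid = GUID_INIT(0x11111111, 0x2222, 0x3333, 0x44, 0x44,
			    0x55, 0x55, 0x55, 0x55, 0x55, 0x55),
	  .driver_data = EXAMPLE_SHUTDOWN },
	{ },
};

static int example_probe(struct hv_device *hdev,
			 const struct hv_vmbus_device_id *dev_id)
{
	struct example_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

	if (!state)
		return -ENOMEM;

	state->service = dev_id->driver_data;	/* no GUID matching by hand */
	hv_set_drvdata(hdev, state);		/* replaces the old 'ext' field */
	return 0;
}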
2019 Nov 14
15
[PATCH net-next v2 00/15] vsock: add multi-transports support
Most of the patches have been reviewed by Dexuan, Stefan, and Jorgen.
The following patches still need review:
- [11/15] vsock: add multi-transports support
- [12/15] vsock/vmci: register vmci_transport only when VMCI guest/host
are active
- [15/15] vhost/vsock: refuse CID assigned to the guest->host transport
RFC: https://patchwork.ozlabs.org/cover/1168442/
v1:
2019 Sep 27
29
[RFC PATCH 00/13] vsock: add multi-transports support
Hi all,
this series adds multi-transport support to vsock, following
this proposal:
https://www.spinics.net/lists/netdev/msg575792.html
With multi-transport support, we can use vsock with nested VMs
(even with different hypervisors), loading both guest->host and
host->guest transports at the same time.
Before this series, vmci-transport supported this behavior but only
using
2019 Oct 23
33
[PATCH net-next 00/14] vsock: add multi-transports support
This series adds multi-transport support to vsock, following
this proposal: https://www.spinics.net/lists/netdev/msg575792.html
With multi-transport support, we can use VSOCK with nested VMs
(even with different hypervisors), loading both guest->host and
host->guest transports at the same time.
Before this series, vmci-transport supported this behavior but only
using VMware
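As a rough sketch of the model this series introduces (simplified; the registration call and flags are those that ended up in include/net/af_vsock.h, but the transport and function names are invented and most callbacks are omitted): each transport registers with the vsock core declaring whether it handles guest->host (G2H) or host->guest (H2G) traffic, and the core then picks a transport per socket based on the peer CID, which is what allows both directions to coexist for nested VMs.

#include <linux/init.h>
#include <linux/types.h>
#include <net/af_vsock.h>

static u32 example_get_local_cid(void)
{
	return 3;	/* placeholder CID for illustration */
}

static struct vsock_transport example_transport = {
	.get_local_cid = example_get_local_cid,
	/* a real transport also fills in connect, shutdown, the stream ops, ... */
};

static int __init example_init(void)
{
	/* A guest-side transport registers as G2H; a host-side one as H2G. */
	return vsock_core_register(&example_transport, VSOCK_TRANSPORT_F_G2H);
}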