Displaying 20 results from an estimated 40 matches for "tun_set_iff".
2020 Sep 23
1
Re: consuming pre-created tap - with multiqueue
...e descriptor for it.
>
> AFAIK, there is no problem with VNET_HDR, as it is a standard flag
> we've set on all tap devices on Linux for 10 years.
Looking at the kernel code, you need to set the MULTI_QUEUE flag at
the time you create the device and also set it when opening the
device. In tun_set_iff():
	if (!!(ifr->ifr_flags & IFF_MULTI_QUEUE) !=
	    !!(tun->flags & IFF_MULTI_QUEUE))
		return -EINVAL;
so if you've configured QEMU to use multiqueue, then you need
to use:
$ ip tuntap add dev mytap mode tap vnet_hdr mult...
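For illustration, attaching to a tap created that way looks roughly like this from userspace (a minimal sketch, not from the thread; it assumes the flag truncated above is multi_queue and reuses the name mytap):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Open one queue of a pre-created multi-queue tap. The flags passed to
 * TUNSETIFF must agree with how the device was created, or tun_set_iff()
 * fails the check quoted above with -EINVAL. */
static int open_tap_queue(const char *name)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0)
		return -1;
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_VNET_HDR | IFF_MULTI_QUEUE;
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		perror("TUNSETIFF");	/* EINVAL usually means a flag mismatch */
		close(fd);
		return -1;
	}
	return fd;	/* one fd per queue; call again for each extra queue */
}

Each additional queue is just another open of /dev/net/tun attached with the same flags.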
2020 Aug 30
1
Re: plug pre-created tap devices to libvirt guests
...not
configure /dev/net/tun (tap0): Permission denied
So as a note, I'd say that even Libvirt aside, Qemu is trying to do this as well:
https://github.com/qemu/qemu/blob/0982a56a551556c704dc15752dabf57b4be1c640/net/tap-linux.c#L104
But it's unclear where the EPERM is coming from in the kernel at tun_set_iff().
Of note: if I give Qemu a non-existent tap name, it will create the
device, but if I give it an existing tap name, I get EPERM.
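For what it's worth, one likely source is the ownership check tun_set_iff() applies when attaching to an existing device: unless the caller has CAP_NET_ADMIN, its uid/gid must match the owner/group recorded on the persistent tap. A hedged sketch of pre-creating the device so an unprivileged Qemu can attach later (the uid 107 is an assumption; substitute whatever user Qemu runs as):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Run once as root: create a persistent tap owned by the target user so
 * its later TUNSETIFF does not trip tun_not_capable() and return -EPERM. */
int main(void)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_VNET_HDR;
	if (fd < 0 || ioctl(fd, TUNSETIFF, &ifr) < 0)
		return 1;
	if (ioctl(fd, TUNSETOWNER, (uid_t)107) < 0)	/* assumed qemu uid */
		return 1;
	if (ioctl(fd, TUNSETPERSIST, 1) < 0)	/* keep device after close */
		return 1;
	return 0;
}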
2009 Apr 02
7
[Lguest] [PATCH 4/5] lguest: use KVM hypercalls
Fri, 27 03 2009 at 10:22 +1030, Rusty Russell wrote:
> From: Matias Zabaljauregui <zabaljauregui at gmail.com>
>
> Impact: cleanup
>
> This patch allows us to use KVM hypercalls
Something has broken in relation to this change. I'm not sure whether
it is this change itself or a later one, but I get the following error
when using lguest:
lguest: unhandled trap 6 at 0x418726
2008 Apr 18
4
[0/6] [NET]: virtio SG/TSO patches
Hi:
Here are the patches I used for testing KVM with virtio-net using
TSO. There are three patches for the tun device, which are basically
Rusty's patches with the mmap turned into copying (for correctness).
Two patches are for the virtio-net frontend: one required to support
receiving SG/TSO, and the other useful for testing SG per se. The
other patch is to the KVM backend to make all this
2008 Jul 12
4
[PATCH] tun: Fix/rewrite packet filtering logic
...P_DEV:
 		/* Ethernet TAP Device */
-		dev->set_multicast_list = tun_net_mclist;
-
 		ether_setup(dev);
-		dev->change_mtu = tun_net_change_mtu;
+		dev->change_mtu = tun_net_change_mtu;
+		dev->set_multicast_list = tun_net_mclist;
-		/* random address already created for us by tun_set_iff, use it */
-		memcpy(dev->dev_addr, tun->dev_addr, min(sizeof(tun->dev_addr), sizeof(dev->dev_addr)));
+		random_ether_addr(dev->dev_addr);
 		dev->tx_queue_len = TUN_READQ_SIZE;	/* We prefer our own queue length */
 		break;
@@ -486,7 +565,6 @@ static ssize_t tun_chr_aio_read...
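For context, the userspace side of the reworked filter is TUNSETTXFILTER taking a struct tun_filter followed by an address array; a minimal sketch (the helper name and single-address layout are illustrative, not from the patch):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/if_ether.h>
#include <linux/if_tun.h>

/* Allow exactly one unicast address through the tap's filter.
 * struct tun_filter is followed in memory by "count" ETH_ALEN addresses. */
static int set_one_addr_filter(int fd, const unsigned char mac[ETH_ALEN])
{
	struct {
		struct tun_filter f;
		unsigned char addr[1][ETH_ALEN];
	} req;

	memset(&req, 0, sizeof(req));
	req.f.count = 1;
	memcpy(req.addr[0], mac, ETH_ALEN);
	return ioctl(fd, TUNSETTXFILTER, &req.f);
}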
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers, the
current single-queue tap cannot satisfy the requirement of
scaling guest network performance as the number of vcpus
increases. So the following series implements multi-queue
support in tun/tap.
In order to take advantage of this, a multi-queue capable
driver and qemu are also needed. I just rebased the latest
version of
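As the queue control interface eventually merged (the RFC's exact ABI may differ in detail), individual queues can be disabled and re-enabled at runtime with TUNSETQUEUE; roughly:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Detach or re-attach one queue fd of a multi-queue tap at runtime. */
static int tap_set_queue(int fd, int attach)
{
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = attach ? IFF_ATTACH_QUEUE : IFF_DETACH_QUEUE;
	return ioctl(fd, TUNSETQUEUE, &ifr);
}

This lets the number of active queues follow the guest's load rather than staying fixed at creation time.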
2008 Jan 23
1
[PATCH 1/3] Cleanup and simplify virtnet header
1) Turn GSO on virtio net into an all-or-nothing flag (keeping checksumming
separate). Having multiple bits is a pain: if you can't support something,
you should handle it in software, which is still a performance win.
2) Make VIRTIO_NET_HDR_GSO_ECN a flag in the header, so it can apply to
IPv6 or v4.
3) Rename VIRTIO_NET_F_NO_CSUM to VIRTIO_NET_F_CSUM (i.e. it means we do
checksumming).
4)
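For reference, the header layout these points converged on, as it appears in the classic linux/virtio_net.h (shown to illustrate items 2 and 3; not quoted from the patch):

#include <linux/types.h>

struct virtio_net_hdr {
#define VIRTIO_NET_HDR_F_NEEDS_CSUM	1	/* csum_start/csum_offset valid */
	__u8 flags;
#define VIRTIO_NET_HDR_GSO_NONE		0
#define VIRTIO_NET_HDR_GSO_TCPV4	1
#define VIRTIO_NET_HDR_GSO_UDP		3
#define VIRTIO_NET_HDR_GSO_TCPV6	4
#define VIRTIO_NET_HDR_GSO_ECN		0x80	/* a flag bit, valid with v4 or v6 */
	__u8 gso_type;
	__u16 hdr_len;		/* Ethernet + IP + tcp/udp headers */
	__u16 gso_size;		/* bytes per GSO segment */
	__u16 csum_start;
	__u16 csum_offset;
};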
2008 Aug 13
1
[PATCH 1/1] tun: TUNGETIFF interface to query name and flags
...c | 39 +++++++++++++++++++++++++++++++++++++++
include/linux/if_tun.h | 1 +
2 files changed, 40 insertions(+), 0 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index e6bbc63..95931a5 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -748,6 +748,36 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
 	return err;
 }
+static int tun_get_iff(struct net *net, struct file *file, struct ifreq *ifr)
+{
+	struct tun_struct *tun = file->private_data;
+
+	if (!tun)
+		return -EBADFD;
+
+	DBG(KERN_INFO "%s: tun_get_iff\n", tun->dev...
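Usage from userspace is symmetric with TUNSETIFF; a small sketch (assuming fd is already attached to a device):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Query the name and flags of the device an attached fd points at. */
static int print_tap_info(int fd)
{
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	if (ioctl(fd, TUNGETIFF, &ifr) < 0)
		return -1;	/* EBADFD if the fd is not attached yet */
	printf("%s: flags 0x%x\n", ifr.ifr_name, ifr.ifr_flags);
	return 0;
}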
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...t total_len)
 		return -EBADFD;
 	ret = tun_get_user(tun, tfile, m->msg_control, &m->msg_iter,
-			   m->msg_flags & MSG_DONTWAIT);
+			   m->msg_flags & MSG_DONTWAIT,
+			   m->msg_flags & MSG_MORE);
 	tun_put(tun);
 	return ret;
 }
@@ -1770,6 +1808,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
 	tun->align = NET_SKB_PAD;
 	tun->filter_attached = false;
 	tun->sndbuf = tfile->socket.sk->sk_sndbuf;
+	tun->rx_batched = 0;
 	tun->pcpu_stats = netdev_alloc_pcpu_stats(struct tun_pcpu_stats);
 	if (!tun->pcpu...
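The idea in miniature (a userspace analogue for illustration, not the driver code): packets hinted with MSG_MORE are held back until a stand-in for tun->rx_batched fills up or the hint is dropped:

#include <stdio.h>

#define RX_BATCHED 4	/* stand-in for tun->rx_batched */

static int queued;

static void flush_batch(void)
{
	printf("deliver %d packet(s) to the stack\n", queued);
	queued = 0;
}

/* "more" mirrors the MSG_MORE hint tun_get_user() now receives. */
static void receive_packet(int more)
{
	queued++;
	if (!more || queued >= RX_BATCHED)
		flush_batch();
}

int main(void)
{
	receive_packet(1);	/* hinted: held back */
	receive_packet(1);	/* hinted: held back */
	receive_packet(0);	/* no hint: flushes all three */
	return 0;
}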
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...t total_len)
 		return -EBADFD;
 	ret = tun_get_user(tun, tfile, m->msg_control, &m->msg_iter,
-			   m->msg_flags & MSG_DONTWAIT);
+			   m->msg_flags & MSG_DONTWAIT,
+			   m->msg_flags & MSG_MORE);
 	tun_put(tun);
 	return ret;
 }
@@ -1771,6 +1809,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
 	tun->align = NET_SKB_PAD;
 	tun->filter_attached = false;
 	tun->sndbuf = tfile->socket.sk->sk_sndbuf;
+	tun->rx_batched = 0;
 	tun->pcpu_stats = netdev_alloc_pcpu_stats(struct tun_pcpu_stats);
 	if (!tun->pcpu...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This was
done by using MSG_MORE as a hint for the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and
submit them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement on guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%
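The MSG_MORE hint itself is longstanding socket API; a self-contained userspace illustration of the semantics the series leans on (UDP to the local discard port, purely illustrative):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(9) };

	inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
	connect(fd, (struct sockaddr *)&dst, sizeof(dst));
	send(fd, "hello ", 6, MSG_MORE);	/* held back: more data coming */
	send(fd, "world", 5, 0);		/* flushes as one 11-byte datagram */
	return 0;
}

vhost_net applies the same hint to the tap's socket so the backend can decide when to flush its batch.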
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi:
This series tries to implement tx batching support for vhost. This was
done by using MSG_MORE as a hint for the underlying socket. The backend
(e.g. tap) can then batch the packets temporarily in a list and
submit them all once the number of batched packets exceeds a limit.
Tests show an obvious improvement on guest pktgen over
mlx4 (noqueue) on the host:
Mpps -+%