search for: tun_chr_close

Displaying 5 results from an estimated 5 matches for "tun_chr_close".

2009 Apr 16
1
[1/2] tun: Only free a netdev when all tun descriptors are closed
...; represents whether this device has ever been attached to a device.
>
> Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
>
> Cheers,
>
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> .....
> @@ -1275,20 +1278,18 @@ static int tun_chr_close(struct inode *inode, struct file *file)
>  	struct tun_file *tfile = file->private_data;
>  	struct tun_struct *tun = __tun_get(tfile);
>
> -
>  	if (tun) {
> -		DBG(KERN_INFO "%s: tun_chr_close\n", tun->dev->name);
> -
> -		rtnl_lock();
> -		__tun_de...
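The excerpt above removes the per-close teardown and defers freeing to reference counting: the netdev stays alive until the last tun descriptor is gone. Below is a minimal userspace C analogue of that pattern, assuming nothing beyond C11 atomics; the names (shared_dev, dev_get, dev_put) are illustrative and do not come from drivers/net/tun.c.

/* Userspace analogue of the pattern in the patch above: the shared
 * object (standing in for the netdev) is freed only when the last
 * handle (standing in for a tun file descriptor) is dropped. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct shared_dev {
    atomic_int refcnt;
    char name[16];
};

static struct shared_dev *dev_get(struct shared_dev *d)
{
    atomic_fetch_add(&d->refcnt, 1);
    return d;
}

static void dev_put(struct shared_dev *d)
{
    /* Free only when the last reference goes away. */
    if (atomic_fetch_sub(&d->refcnt, 1) == 1) {
        printf("last close: freeing %s\n", d->name);
        free(d);
    }
}

int main(void)
{
    struct shared_dev *d = calloc(1, sizeof(*d));
    atomic_init(&d->refcnt, 1);       /* creator's reference */
    snprintf(d->name, sizeof(d->name), "tun0");

    struct shared_dev *fd1 = dev_get(d);  /* first "descriptor" */
    struct shared_dev *fd2 = dev_get(d);  /* second "descriptor" */

    dev_put(fd1);   /* device survives: fd2 still open */
    dev_put(fd2);   /* device survives: creator's reference remains */
    dev_put(d);     /* last reference: only now is it freed */
    return 0;
}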
2012 Sep 30
21
Xen 4.0.4, kernel 3.5.0 HVM crash and kernel BUG
...e8f ffff8800377b0400 ffffffff8134b6cd ffff8800389ffc28 ffff8800389ffc28
 ffff8800377b00f8 ffff8800377b0680 ffff880038cdcd60 ffff8800377b0000
Call Trace:
 [<ffffffff8133ce8f>] ? sk_release_kernel+0x23/0x39
 [<ffffffff8134b6cd>] ? netdev_run_todo+0x1e9/0x206
 [<ffffffff8129798f>] ? tun_chr_close+0x4c/0x7b
 [<ffffffff810b39d3>] ? fput+0xe4/0x1c5
 [<ffffffff810b202c>] ? filp_close+0x61/0x68
 [<ffffffff81035e62>] ? put_files_struct+0x62/0xb9
 [<ffffffff81036374>] ? do_exit+0x24a/0x74c
 [<ffffffff81036906>] ? do_group_exit+0x6b/0x9d
 [<ffffffff8103ea0b>] ? g...
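The trace reaches tun_chr_close from process exit: do_exit releases the file table, filp_close/fput drop the last reference, and the driver's release callback runs into netdev teardown (netdev_run_todo). The sketch below exercises the same close-on-exit path from userspace; it needs root or CAP_NET_ADMIN, and the interface name crashtest0 is arbitrary.

/* Create a tun device and exit without closing the descriptor, so the
 * kernel walks do_exit -> put_files_struct -> filp_close -> fput ->
 * tun_chr_close, the same sequence shown in the trace above. */
#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

int main(void)
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) {
        perror("open /dev/net/tun");
        return 1;
    }

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;
    strncpy(ifr.ifr_name, "crashtest0", IFNAMSIZ - 1);  /* arbitrary name */
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        perror("TUNSETIFF");
        return 1;
    }

    /* Exit without close(fd): the kernel's exit path releases the
     * descriptor, which is where the trace enters tun_chr_close. */
    return 0;
}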
2011 Aug 12
11
[net-next RFC PATCH 0/7] multiqueue support for tun/tap
As multi-queue NICs are commonly used in high-end servers, the current single-queue tap cannot satisfy the requirement of scaling guest network performance as the number of vcpus increases. So the following series implements multi-queue support in tun/tap. To take advantage of this, a multi-queue capable driver and qemu are also needed. I just rebased the latest version of
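For context, multi-queue tun/tap eventually landed in mainline behind the IFF_MULTI_QUEUE flag; the sketch below shows how userspace attaches several queues under that merged interface, which may differ in detail from the API in this RFC series. The device name mqtap0 and the queue count are illustrative.

/* Attach NQUEUES queues to one tap device, assuming the merged
 * IFF_MULTI_QUEUE interface: each open of /dev/net/tun that sets
 * TUNSETIFF with the same name adds one queue. Needs CAP_NET_ADMIN. */
#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define NQUEUES 4  /* illustrative: e.g. one queue per vcpu */

int main(void)
{
    int fds[NQUEUES];

    for (int i = 0; i < NQUEUES; i++) {
        struct ifreq ifr;

        fds[i] = open("/dev/net/tun", O_RDWR);
        if (fds[i] < 0) {
            perror("open /dev/net/tun");
            return 1;
        }

        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
        strncpy(ifr.ifr_name, "mqtap0", IFNAMSIZ - 1);  /* arbitrary name */
        if (ioctl(fds[i], TUNSETIFF, &ifr) < 0) {
            perror("TUNSETIFF");
            return 1;
        }
    }

    printf("attached %d queues to mqtap0\n", NQUEUES);
    /* Each fd in fds[] now sends and receives on its own queue. */
    for (int i = 0; i < NQUEUES; i++)
        close(fds[i]);
    return 0;
}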