similar to: Re: kernel panic in skb_copy_bits

Displaying 20 results from an estimated 110 matches similar to: "Re: kernel panic in skb_copy_bits"

2013 Jun 28
0
Re: kernel panic in skb_copy_bits
On Fri, 2013-06-28 at 12:17 +0800, Joe Jin wrote: > Found a similar issue, http://www.gossamer-threads.com/lists/xen/devel/265611 > so copied to the Xen developers as well. > > On 06/27/13 13:31, Eric Dumazet wrote: > > On Thu, 2013-06-27 at 10:58 +0800, Joe Jin wrote: > >> Hi, > >> > >> When we do failover tests with iscsi + multipath by resetting the switches
2005 Jun 01
0
[PATCH] skb_copy_bits() can return err
skb_copy_bits() can return an error, so have netif_be_start_xmit() crash informatively. Thanks, Nivedita
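A minimal sketch of the pattern the patch argues for: check the return value of skb_copy_bits() instead of ignoring it. The function and variable names below are illustrative, not the actual Xen netback code:

    /* Illustrative only: skb_copy_bits() returns a negative errno
     * (e.g. -EFAULT) when the requested range falls outside the skb. */
    static void copy_region(struct sk_buff *skb, int offset, void *dst, int len)
    {
            int err = skb_copy_bits(skb, offset, dst, len);

            /* Crash informatively here rather than corrupting memory later. */
            BUG_ON(err);
    }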
2023 Aug 08
0
[Bridge] [PATCH v2 11/14] networking: Update to register_net_sysctl_sz
On Tue, Aug 08, 2023 at 01:20:36PM +0200, Przemek Kitszel wrote: > On 7/31/23 09:17, Joel Granados wrote: > > Move from register_net_sysctl to register_net_sysctl_sz for all the > > networking related files. Do this while making sure to mirror the NULL > > assignments with a table_size of zero for the unprivileged users. > > ... > > const char *dev_name_source;
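For context, register_net_sysctl_sz() takes the number of table entries explicitly rather than relying on a sentinel entry alone. A hedged sketch of the conversion, where the "demo" table name and sysctl path are invented for illustration:

    static struct ctl_table demo_table[] = {
            {
                    .procname = "demo_value",
                    .maxlen   = sizeof(int),
                    .mode     = 0644,
            },
    };

    hdr = register_net_sysctl_sz(net, "net/demo", demo_table,
                                 ARRAY_SIZE(demo_table));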
2004 Jun 22
3
[ANNOUNCE] sch_ooo - Out-of-order packet queue discipline
Hello! I'd like to announce sch_ooo, a new queue discipline that, attached to a class (or to a device, as root), reorders the packets that pass through it by delaying some of them. Example: tc qdisc add dev eth0 root ooo limit 100 gap 4 wait 1100 This queue will create a pfifo with limit 100 and will delay every 4th packet by 1100 ms. A stream of 6 packets like this: 1 2 3 4 5 6, generated by ping, will be reordered
2010 May 31
0
Kernel panic occurs when multiple VMs are booting together
This error is not 100% reproducible. However, kernel panics have occurred many times over the last two months. It usually happens when multiple VMs (in our case, 14 VMs) are booting together. It began to happen after we made three big changes to gain more availability. The changes are: 1. Use *NAS* as VM storage for migration from local disk 2. Use *two bondings* for switch HA (with 4 NICs) from
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load, so this patch tries to do some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, the packet is batched temporarily on a linked list, and all of them are submitted once MSG_MORE is cleared. Tests
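The idea reduces to holding packets on a local list while the sender promises more, then flushing in one pass. A rough sketch with invented names, not the actual tun driver code:

    /* batch is a struct sk_buff_head, initialised elsewhere. */
    if (msg->msg_flags & MSG_MORE) {
            /* Caller says more packets follow: hold this one back. */
            __skb_queue_tail(&batch, skb);
    } else {
            /* Last packet of the burst: queue it and flush the lot. */
            __skb_queue_tail(&batch, skb);
            while ((skb = __skb_dequeue(&batch)) != NULL)
                    netif_receive_skb(skb);
    }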
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load, so this patch tries to do some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, the packet is batched temporarily on a linked list, and all of them are submitted once MSG_MORE is cleared. Tests
2008 Dec 14
5
[PATCH] AF_VMCHANNEL address family for guest<->host communication.
There is a need for a communication channel between the host and the various agents running inside a VM guest. The channel will be used for statistics gathering, logging, cut & paste, host screen resolution change notifications, guest configuration, etc. It is undesirable to use TCP/IP for this purpose since network connectivity may not exist between host and guest, and even if it exists, the traffic
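As proposed, a guest agent would open a channel with an ordinary socket call. A hypothetical sketch; AF_VMCHANNEL was never merged upstream, so the constant below exists only in the patch:

    /* Hypothetical: requires a kernel carrying the AF_VMCHANNEL patch. */
    int fd = socket(AF_VMCHANNEL, SOCK_STREAM, 0);
    if (fd < 0)
            perror("socket");    /* fails on an unpatched kernel */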
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load, so this patch tries to do some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, the packet is batched temporarily on a linked list, and all of them are submitted once MSG_MORE is cleared. Tests
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
We can only process one packet at a time during sendmsg(). This often leads to bad cache utilization under heavy load, so this patch tries to do some batching during rx before submitting packets to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, the packet is batched temporarily on a linked list, and all of them are submitted once MSG_MORE is cleared. Tests
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
On Fri, Jan 06, 2017 at 10:13:17AM +0800, Jason Wang wrote: > We can only process one packet at a time during sendmsg(). This often > leads to bad cache utilization under heavy load. So this patch tries to do > some batching during rx before submitting packets to the host network > stack. This is done through accepting MSG_MORE as a hint from the > sendmsg() caller: if it is set, batch the packet
2016 Jun 30
0
[PATCH net-next V3 6/6] tun: switch to use skb array for tx
We used to queue tx packets in sk_receive_queue, which is less efficient since it requires spinlocks to synchronize between producer and consumer. This patch tries to address this by: - switching from sk_receive_queue to an skb_array, resizing it when tx_queue_len is changed. - introducing a new proto_ops, peek_len, used for peeking at the skb length. - implementing a tun version of peek_len
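skb_array is a sk_buff-typed wrapper around ptr_ring, so produce and consume need no shared lock when there is one producer and one consumer. A minimal sketch under that single-producer/single-consumer assumption:

    #include <linux/skb_array.h>

    struct skb_array ring;

    skb_array_init(&ring, 256, GFP_KERNEL);    /* fixed-size ring */

    if (skb_array_produce(&ring, skb))         /* non-zero means full */
            kfree_skb(skb);

    skb = skb_array_consume(&ring);            /* NULL when empty */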
2016 Jun 17
0
[PATCH net-next V2] tun: introduce tx skb ring
On Wed, Jun 15, 2016 at 04:38:17PM +0800, Jason Wang wrote: > We used to queue tx packets in sk_receive_queue, which is less > efficient since it requires spinlocks to synchronize between producer > and consumer. > > This patch tries to address this by: > > - introduce a new mode which will only be enabled with IFF_TX_ARRAY > set and switch from sk_receive_queue to a
2013 Oct 08
1
OT: errors compiling a kernel module as an rpm package
Hi all, I am trying to compile openvswitch's kernel module on a CentOS 6.4 host, but it fails in rpm-check: Requires: kernel(__alloc_percpu) = 0x55f2580b kernel(__alloc_skb) = 0x25421969 kernel(__dev_get_by_index) = 0x6a6d551b kernel(__init_waitqueue_head) = 0xffc7c184 kernel(__ip_select_ident) = 0x848695b3 kernel(__kmalloc) = 0x5a34a45c kernel(__list_add) = 0x0343a1a8 kernel(__nla_put) =
2016 Jun 15
7
[PATCH net-next V2] tun: introduce tx skb ring
We used to queue tx packets in sk_receive_queue, which is less efficient since it requires spinlocks to synchronize between producer and consumer. This patch tries to address this by: - introducing a new mode, enabled only with IFF_TX_ARRAY set, which switches from sk_receive_queue to a fixed-size skb array with 256 entries. - introducing a new proto_ops, peek_len, which was
2004 Jul 01
20
[PATCH 2.6] update to network emulation QOS scheduler
This patch updates the network emulation packet scheduler. * name changed from delay to netem since it does more than just delay * merged Catalin's code to do packet reordering * uses socket queues directly rather than layering on qdisc(fifo), because this is used in performance tests * adds placeholders in the API for future enhancements (rate and duplicate).
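After the rename, the discipline is configured as netem; a typical invocation (parameters are illustrative) adds delay with jitter on an interface:

    tc qdisc add dev eth0 root netem delay 100ms 10ms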
2013 Feb 12
3
[PATCHv2 vringh 0/3] Introduce CAIF Virtio driver
From: Sjur Brændeland <sjur.brandeland at stericsson.com> This driver depends on Rusty's new host virtio ring implementation, so this patch set is based on the vringh branch in Rusty's git. Changes since V1: - Use the new iov helper functions and simplify iov handling. However, this triggers compile warnings, as it takes struct iov while the kernel API uses struct kiov - Introduced