search for: skbuff_head_cache

Displaying 12 unique results from an estimated 15 matches for "skbuff_head_cache".

2009 Apr 09
1
[Bridge] Out of memory problem
Hi, I'm using Linux 2.6.21.5 and our kernel freezes. The problem: if I create a software bridge with the brctl command and add two interfaces, say eth0.0 and eth0.1, using $brctl addbr br-lan $brctl addif br-lan eth0.0 $brctl addif br-lan eth0.1 and then send traffic from a host connected to one port to a host connected at the other end, soon all the memory is dried up and the kernel
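For reference, the same bridge setup can be done programmatically. The sketch below is illustrative only, not from the thread; it uses the kernel's bridge ioctls (SIOCBRADDBR/SIOCBRADDIF, the same interface brctl itself uses) with the interface names taken from the post, and it needs root just as brctl does.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/sockios.h>

    /* Add one port to a bridge, mirroring "brctl addif". */
    static int bridge_addif(int fd, const char *bridge, const char *port)
    {
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, bridge, IFNAMSIZ - 1);
        ifr.ifr_ifindex = if_nametoindex(port);
        if (ifr.ifr_ifindex == 0)
            return -1;
        return ioctl(fd, SIOCBRADDIF, &ifr);
    }

    int main(void)
    {
        /* Needs CAP_NET_ADMIN (root), like brctl itself. */
        int fd = socket(AF_LOCAL, SOCK_STREAM, 0);

        if (fd < 0) { perror("socket"); return 1; }
        if (ioctl(fd, SIOCBRADDBR, "br-lan") < 0)      /* brctl addbr br-lan */
            perror("SIOCBRADDBR br-lan");
        if (bridge_addif(fd, "br-lan", "eth0.0") < 0)  /* brctl addif br-lan eth0.0 */
            perror("addif eth0.0");
        if (bridge_addif(fd, "br-lan", "eth0.1") < 0)  /* brctl addif br-lan eth0.1 */
            perror("addif eth0.1");
        close(fd);
        return 0;
    }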
2011 Sep 01
1
No buffer space available - loses network connectivity
...All my investigation so far has led me to believe that the skbuff cache is getting full.

PROC-SLABINFO
skbuff_fclone_cache    227   308  512   7  1 : tunables  54  27  8 : slabdata  44  44  0
skbuff_head_cache     1574  1650  256  15  1 : tunables 120  60  8 : slabdata 110 110  0

SLAB-TOP
Active / Total Objects (% used) : 2140910 / 2200115 (97.3%)
Active / Total Slabs (% used)   : 139160 / 139182 (100.0%)
Active / Total Caches (% used)  : 88 / 136 (64.7%)
Active / Total Siz...
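Reading those columns by hand is error-prone. The small diagnostic below is a sketch written for illustration (not from the thread); it assumes the "slabinfo - version: 2.x" field layout shown above (name, active_objs, num_objs, objsize, ...) and prints the skbuff caches so growth like the above can be watched over time.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/slabinfo", "r");   /* usually needs root */
        char line[512];

        if (!f) {
            perror("fopen /proc/slabinfo");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            char name[64];
            unsigned long active, total, objsize;

            /* Skip the two header lines and every non-skbuff cache. */
            if (strncmp(line, "skbuff", 6) != 0)
                continue;
            if (sscanf(line, "%63s %lu %lu %lu",
                       name, &active, &total, &objsize) == 4)
                printf("%-24s %8lu / %-8lu objects, %lu bytes each\n",
                       name, active, total, objsize);
        }
        fclose(f);
        return 0;
    }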
2007 Feb 08
0
[PATCH] linux: move back skb_pull_rcsum
..._rcsum(struct sk_buff *skb, unsigned int len)
-{
-	BUG_ON(len > skb->len);
-	skb->len -= len;
-	BUG_ON(skb->len < skb->data_len);
-	skb_postpull_rcsum(skb, skb->data, len);
-	return skb->data += len;
-}
-
-EXPORT_SYMBOL_GPL(skb_pull_rcsum);
-
 void __init skb_init(void)
 {
 	skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
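The function being moved trims len bytes from the front of an skb and fixes up the skb's running receive checksum via skb_postpull_rcsum() instead of recomputing it over the remaining data. A self-contained userspace model of that fix-up follows; it is illustrative only, and csum_partial_model/csum_sub_model are hypothetical stand-ins for the kernel's csum_partial()/csum_sub().

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Folded 16-bit ones'-complement sum over a byte range. */
    static uint32_t csum_partial_model(const uint8_t *buf, size_t len, uint32_t sum)
    {
        size_t i;

        for (i = 0; i + 1 < len; i += 2)
            sum += (uint32_t)buf[i] << 8 | buf[i + 1];
        if (len & 1)
            sum += (uint32_t)buf[len - 1] << 8;
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return sum;
    }

    /* Ones'-complement subtraction: subtract == add the complement. */
    static uint32_t csum_sub_model(uint32_t sum, uint32_t addend)
    {
        sum += 0xffff - addend;
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return sum;
    }

    int main(void)
    {
        uint8_t pkt[64];
        size_t pull = 14;            /* e.g. pulling an Ethernet header */

        memset(pkt, 0xa5, sizeof(pkt));
        pkt[3] = 0x17;

        uint32_t whole  = csum_partial_model(pkt, sizeof(pkt), 0);
        uint32_t pulled = csum_partial_model(pkt, pull, 0);
        /* Fix-up, as skb_postpull_rcsum() does ... */
        uint32_t fixed  = csum_sub_model(whole, pulled);
        /* ... versus re-checksumming the remaining bytes. */
        uint32_t fresh  = csum_partial_model(pkt + pull, sizeof(pkt) - pull, 0);

        printf("fixed-up sum 0x%04x, recomputed sum 0x%04x\n",
               (unsigned)fixed, (unsigned)fresh);
        return fixed == fresh ? 0 : 1;
    }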
2011 Sep 01
0
No buffer space available - loses network connectivity
...All my investigation so far has led me to believe that the skbuff cache is getting full.

PROC-SLABINFO
skbuff_fclone_cache    227   308  512   7  1 : tunables  54  27  8 : slabdata  44  44  0
skbuff_head_cache     1574  1650  256  15  1 : tunables 120  60  8 : slabdata 110 110  0

SLAB-TOP
Active / Total Objects (% used) : 2140910 / 2200115 (97.3%)
Active / Total Slabs (% used)   : 139160 / 139182 (100.0%)
Active / Total Caches (% used)  : 88 / 136 (64.7%)
Active / Total Size...
2007 Feb 15
2
Re: [Linux-HA] OCFS2 - Memory hog?
...
biovec-16              480   495  256   15  1
biovec-4               480   531   64   59  1
biovec-1              1104  5481   16  203  1
bio                   1140  2250  128   30  1
sock_inode_cache       456   483  512    7  1
skbuff_fclone_cache     36    40  384   10  1
skbuff_head_cache      655   825  256   15  1
file_lock_cache          5    42   92   42  1
acpi_operand           634   828   40   92  1
acpi_parse_ext           0     0   44   84  1
acpi_parse               0     0   28  127  1
acpi_state               0     0   48   78  1
delayacct_cache...
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all! I have problems with concurrent filesystem actions on an OCFS2 filesystem mounted by 2 nodes (OS=RH5ES, OCFS2=1.2.6). For example: if I have an LV called testlv mounted on /mnt on both servers and run "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 while running du -hs /mnt/test.a at the same time, it takes about 5 seconds for du -hs to execute: 270M
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
...0  2048  2  1 : tunables  24 12 8 : slabdata  0  0 0
xen-skb-512          54  64  512  8  1 : tunables  54 27 8 : slabdata  8  8 0
sock_inode_cache    115 140  384 10  1 : tunables  54 27 8 : slabdata 14 14 0
skbuff_head_cache   354 390  256 15  1 : tunables 120 60 8 : slabdata 26 26 0
proc_inode_cache    449 456  336 12  1 : tunables  54 27 8 : slabdata 38 38 0
sigqueue             27  27  148 27  1 : tunables 120 60 8 : slabdata  1  1...
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
Cache                  Num  Total   Size  Pages
...                    440    560    192     20
biovec-4               284    295     64     59
biovec-1             32822  41006     16    203
bio                  32855  33512     64     59
sock_inode_cache       122    130    384     10
skbuff_fclone_cache     46    100    384     10
skbuff_head_cache      620    920    192     20
xen-skb-65536            0      0  65536      1
xen-skb-32768            0      0  32768      1
xen-skb-16384            0      0  16384      1
xen-skb-8192             0      0   8192      1...
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys, my users are reporting memory issues on our Lustre 1.8.1 clients. When they submit a single job at a time, the run time is about 4.5 minutes. However, when they run multiple jobs (10 or fewer) on a single node with 192GB of memory, the run time for each job exceeds 3-4x that of the single process. They also noticed that the swap space
2012 Oct 31
8
[PATCHv2 net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero-copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest-to-host/host-to-guest traffic, otherwise you get a (minor) performance regression. This patchset addresses the problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to
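The heuristic that cover letter describes can be modeled outside the kernel. The sketch below is entirely hypothetical (names, penalty, and threshold are invented here, not taken from the net-next code): the completion callback reports whether the data had to be copied, and the owner turns zero-copy TX off once copy fallbacks dominate recent completions, re-enabling it after a run of true zero-copy ones.

    #include <stdbool.h>
    #include <stdio.h>

    struct zc_owner {
        int  copy_debt;     /* grows on copied completions, shrinks otherwise */
        bool zerocopy_on;
    };

    /* Called per TX completion; `copied` means a data copy happened. */
    static void tx_complete_cb(struct zc_owner *dev, bool copied)
    {
        if (copied)
            dev->copy_debt += 2;                 /* penalize copy fallbacks */
        else if (dev->copy_debt > 0)
            dev->copy_debt--;
        dev->zerocopy_on = dev->copy_debt < 8;   /* arbitrary threshold */
    }

    int main(void)
    {
        struct zc_owner dev = { 0, true };
        /* Simulated completions: a burst of copy fallbacks (e.g. heavy
         * guest-to-host traffic) followed by clean zero-copy TX. */
        bool pattern[] = { true, true, true, true, true,
                           false, false, false, false, false,
                           false, false, false, false, false };

        for (unsigned i = 0; i < sizeof(pattern) / sizeof(pattern[0]); i++) {
            tx_complete_cb(&dev, pattern[i]);
            printf("completion %2u: copied=%d -> zerocopy %s\n",
                   i, pattern[i], dev.zerocopy_on ? "on" : "off");
        }
        return 0;
    }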
2012 Oct 29
9
[PATCH net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero-copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest-to-host/host-to-guest traffic, otherwise you get a (minor) performance regression. This patchset addresses the problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to
2012 Nov 01
9
[PATCHv3 net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero-copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest-to-host/host-to-guest traffic, otherwise you get a (minor) performance regression. This patchset addresses the problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to